• Plug and Play: Build a G-Assist Plug-In Today

    Project G-Assist — available through the NVIDIA App — is an experimental AI assistant that helps tune, control and optimize NVIDIA GeForce RTX systems.
    NVIDIA’s Plug and Play: Project G-Assist Plug-In Hackathon — running virtually through Wednesday, July 16 — invites the community to explore AI and build custom G-Assist plug-ins for a chance to win prizes and be featured on NVIDIA social media channels.

    G-Assist allows users to control their RTX GPU and other system settings using natural language, thanks to a small language model that runs on device. It can be used from the NVIDIA Overlay in the NVIDIA App without needing to tab out or switch programs. Users can expand its capabilities via plug-ins and even connect it to agentic frameworks such as Langflow.
    Below, find popular G-Assist plug-ins, hackathon details and tips to get started.
    Plug-In and Win
    Join the hackathon by registering and checking out the curated technical resources.
    G-Assist plug-ins can be built in several ways, including with Python for rapid development, with C++ for performance-critical apps and with custom system interactions for hardware and operating system automation.
    For those who prefer vibe coding, the G-Assist Plug-In Builder — a ChatGPT-based app that allows no-code or low-code development with natural language commands — makes it easy for enthusiasts to start creating plug-ins.
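    For a sense of what the Python route involves, below is a minimal, hypothetical sketch of a plug-in command loop. It assumes a JSON-over-stdin/stdout protocol in the spirit of NVIDIA's open-source samples; the message fields (func, params, success, message) and the get_weather command are illustrative assumptions rather than the official API, so follow the GitHub samples for the real message format.

    ```python
    # Hypothetical sketch of a G-Assist plug-in command loop.
    # The message fields ("func", "params", "success", "message") are
    # assumptions for illustration; NVIDIA's sample plug-ins on GitHub
    # define the actual protocol.
    import json
    import sys


    def get_weather(params: dict) -> str:
        """Example command handler; a real plug-in would call an API here."""
        city = params.get("city", "unknown")
        return f"Weather lookup for {city} is not implemented in this sketch."


    COMMANDS = {"get_weather": get_weather}


    def main() -> None:
        for line in sys.stdin:  # one JSON command per line (assumed framing)
            try:
                request = json.loads(line)
                handler = COMMANDS[request["func"]]
                response = {"success": True,
                            "message": handler(request.get("params", {}))}
            except (KeyError, json.JSONDecodeError) as err:
                response = {"success": False, "message": str(err)}
            print(json.dumps(response), flush=True)  # reply on stdout


    if __name__ == "__main__":
        main()
    ```

    In NVIDIA's sample repository, manifest.json declares the functions a plug-in exposes so G-Assist's on-device model knows when to invoke it; a skeleton like the one above would grow one handler per declared function.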
    To submit an entry, participants must provide a GitHub repository, including a source code file (plugin.py), requirements.txt, manifest.json, config.json (if applicable), a plug-in executable file and a README.
    Then, submit a video — between 30 seconds and two minutes — showcasing the plug-in in action.
    Finally, hackathoners must promote their plug-in using #AIonRTXHackathon on a social media channel: Instagram, TikTok or X. Submit projects via this form by Wednesday, July 16.
    Judges will assess plug-ins based on three main criteria: 1) innovation and creativity, 2) technical execution and integration, reviewing technical depth, G-Assist integration and scalability, and 3) usability and community impact, aka how easy it is to use the plug-in.
    Winners will be selected on Wednesday, Aug. 20. First place will receive a GeForce RTX 5090 laptop, second place a GeForce RTX 5080 GPU and third a GeForce RTX 5070 GPU. These top three will also be featured on NVIDIA’s social media channels, get the opportunity to meet the NVIDIA G-Assist team and earn an NVIDIA Deep Learning Institute self-paced course credit.
    Project G-Assist requires a GeForce RTX 50, 40 or 30 Series Desktop GPU with at least 12GB of VRAM, a Windows 11 or 10 operating system, a compatible CPU (Intel Pentium G Series, Core i3, i5, i7 or higher; AMD FX, Ryzen 3, 5, 7, 9, Threadripper or higher), specific disk space requirements and a recent GeForce Game Ready Driver or NVIDIA Studio Driver.
    Plug-In(spiration)
    Explore open-source plug-in samples available on GitHub, which showcase the diverse ways on-device AI can enhance PC and gaming workflows.

    Popular plug-ins include:

    Google Gemini: Enables search-based queries using Google Search integration and large language model-based queries using Gemini capabilities, in real time and from the convenience of the NVIDIA App Overlay, without needing to switch programs.
    Discord: Enables users to easily share game highlights or messages directly to Discord servers without disrupting gameplay.
    IFTTT: Lets users create automations across hundreds of compatible endpoints to trigger IoT routines — such as adjusting room lights and smart shades, or pushing the latest gaming news to a mobile device.
    Spotify: Lets users control Spotify using simple voice commands or the G-Assist interface to play favorite tracks and manage playlists.
    Twitch: Checks if any Twitch streamer is currently live and can access detailed stream information such as titles, games, view counts and more.

    Get G-Assist(ance)
    Join the NVIDIA Developer Discord channel to collaborate, share creations and gain support from fellow AI enthusiasts and NVIDIA staff.
    Save the date for NVIDIA’s How to Build a G-Assist Plug-In webinar on Wednesday, July 9, from 10-11 a.m. PT, to learn more about Project G-Assist capabilities, discover the fundamentals of building, testing and deploying Project G-Assist plug-ins, and participate in a live Q&A session.
    Explore NVIDIA’s GitHub repository, which provides everything needed to get started developing with G-Assist, including sample plug-ins, step-by-step instructions and documentation for building custom functionalities.
    Learn more about the ChatGPT Plug-In Builder to transform ideas into functional G-Assist plug-ins with minimal coding. The tool uses OpenAI’s custom GPT builder to generate plug-in code and streamline the development process.
    NVIDIA’s technical blog walks through the architecture of a G-Assist plug-in, using a Twitch integration as an example. Discover how plug-ins work, how they communicate with G-Assist and how to build them from scratch.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
  • BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4

    By TREVOR HOGG
    Images courtesy of Prime Video.

    For those seeking an alternative to the MCU, Prime Video has two offerings of the live-action and animated variety that take the superhero genre into R-rated territory where the hands of the god-like figures get dirty, bloodied and severed. “The Boys is about the intersection of celebrity and politics using superheroes,” states Stephan Fleet, VFX Supervisor on The Boys. “Sometimes I see the news and I don’t even know we can write to catch up to it! But we try. Invincible is an intense look at an alternate DC Universe that has more grit to the superhero side of it all. On one hand, I was jealous watching Season 1 of Invincible because in animation you can do things that you can’t do in real life on a budget.” Season 4 does not tone down the blood, gore and body count. Fleet notes, “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!”

    When Splinter (Rob Benedict) splits in two, the cloning effect was inspired by cellular mitosis.

    “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!”
    —Stephan Fleet, VFX Supervisor

    A total of 1,600 visual effects shots were created for the eight episodes by ILM, Pixomondo, MPC Toronto, Spin VFX, DNEG, Untold Studios, Luma Pictures and Rocket Science VFX. Previs was a critical part of the process. “We have John Griffith [Previs Director], who owns a small company called CNCPT out of Texas, and he does wonderful Unreal Engine level previs,” Fleet remarks. “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” Founding Director of Federal Bureau of Superhuman Affairs, Victoria Neuman, literally gets ripped in half by two tendrils coming out of Compound V-enhanced Billy Butcher, the leader of superhero resistance group The Boys. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.”

    Multiple plates were shot to enable Simon Pegg to phase through the actor lying in a hospital bed.

    Testing can get rather elaborate. “For that end scene with Butcher’s tendrils, the room was two stories, and we were able to put the camera up high along with a bunch of blood cannons,” Fleet recalls. “When the body rips in half and explodes, there is a practical component. We rained down a bunch of real blood and guts right in front of Huey. It’s a known joke that we like to douse Jack Quaid with blood as much as possible! In this case, the special effects team led by Hudson Kenny needed to test it the day before, and I said, ‘I’ll be the guinea pig for the test.’ They covered the whole place with plastic like it was a Dexter kill room because you don’t want to destroy the set. I’m standing there in a white hazmat suit with goggles on, covered from head to toe in plastic and waiting as they’re tweaking all of these things. It sounds like World War II going on. They’re on walkie talkies to each other, and then all of a sudden, it’s ‘Five, four, three, two, one…’ And I get exploded with blood. I wanted to see what it was like, and it’s intense.”

    “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.”
    —Stephan Fleet, VFX Supervisor

    The Deep has a love affair with an octopus called Ambrosius, voiced by Tilda Swinton. “It’s implied bestiality!” Fleet laughs. “I would call it more of a romance. What was fun from my perspective is that I knew what the look was going to be [from Season 3], so then it’s about putting in the details and the animation. One of the instincts that you always have when you’re making a sea creature that talks to a human [is] you tend to want to give it human gestures and eyebrows. Erik Kripke [Creator, Executive Producer, Showrunner, Director, Writer] said, ‘No. We have to find things that an octopus could do that conveys the same emotion.’ That’s when ideas came in, such as putting a little The Deep toy inside the water tank. When Ambrosius is trying to have an intimate moment or connect with him, she can wrap a tentacle around that. My favorite experience doing Ambrosius was when The Deep is reading poetry to her on a bed. CG creatures touching humans is one of the more complicated things to do and make look real. Ambrosius’ tentacles reach for his arm, and it becomes an intimate moment. More than touching the skin, displacing the bedsheet as Ambrosius moved ended up becoming a lot of CG, and we had to go back and forth a few times to get that looking right; that turned out to be tricky.”

    A building is replaced by a massive crowd attending a rally being held by Homelander.

    In a twisted form of sexual foreplay, Sister Sage has The Deep perform a transorbital lobotomy on her. “Thank you, Amazon for selling lobotomy tools as novelty items!” Fleet chuckles. “We filmed it with a lobotomy tool on set. There is a lot of safety involved in doing something like that. Obviously, you don’t want to put any performer in any situation where they come close to putting anything real near their eye. We created this half lobotomy tool and did this complicated split screen with the lobotomy tool on a teeter totter. The Deep was [acting in a certain way] in one shot and Sister Sage reacted in the other shot. To marry the two ended up being a lot of CG work. Then there are these close-ups which are full CG. I always keep a dummy head that is painted gray that I use all of the time for reference. In macrophotography I filmed this lobotomy tool going right into the eye area. I did that because the tool is chrome, so it’s reflective and has ridges. It has an interesting reflective property. I was able to see how and what part of the human eye reflects onto the tool. A lot of that shot became about realistic reflections and lighting on the tool. Then heavy CG for displacing the eye and pushing the lobotomy tool into it. That was one of the more complicated sequences that we had to achieve.”

    In order to create an intimate moment between Ambrosius and The Deep, a toy version of the superhero was placed inside of the water tank that she could wrap a tentacle around.

    “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.”
    —Stephan Fleet, VFX Supervisor

    Sheep and chickens embark on a violent rampage courtesy of Compound V, with the latter piercing the chest of a bodyguard belonging to Victoria Neuman. “Weirdly, that was one of our more traditional shots,” Fleet states. “What is fun about that one is I asked for real chickens as reference. The chicken flying through his chest is real. It’s our chicken wrangler in a green suit gently tossing a chicken. We blended two real plates together with some CG in the middle.” A connection was made with a sci-fi classic. “The sheep kill this bull, and we shot it in this narrow corridor of fencing. When they run, I always equated it to the Trench Run in Star Wars and looked at the sheep as TIE fighters or X-wings coming at them.” The scene was one of the scarier moments for the visual effects team. Fleet explains, “When I read the script, I thought this could be the moment where we jump the shark. For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”

    The sheep injected with Compound V develop the ability to fly and were shot in an imperfect manner to help ground the scenes.

    Once injected with Compound V, Hugh Campbell Sr. (Simon Pegg) develops the ability to phase through objects, including human beings. “We called it the Bro-nut because his name in the script is Wall Street Bro,” Fleet notes. “That was a complicated motion control shot, repeating the move over and over again. We had to shoot multiple plates of Simon Pegg and the guy in the bed. Special effects and prosthetics created a dummy guy with a hole in his chest with practical blood dripping down. It was meshing it together and getting the timing right in post. On top of that, there was the CG blood immediately around Simon Pegg.” The phasing effect had to avoid appearing as a dissolve. “I had this idea of doing high-frequency vibration on the X axis loosely based on how The Flash vibrates through walls. You want everything to have a loose motivation that then helps trigger the visuals. We tried not to overcomplicate that because, ultimately, you want something like that to be quick. If you spend too much time on phasing, it can look cheesy. In our case, it was a lot of false walls. Simon Pegg is running into a greenscreen hole which we plug in with a wall or coming out of one. I went off the actor’s action, and we added a light opacity mix with some X-axis shake.”

    Providing a different twist to the fights was the replacement of spurting blood with photoreal rubber duckies during a drug-induced hallucination.

    Homelander (Anthony Starr) breaks a mirror, which emphasizes his multiple personality disorder. “The original plan was that special effects was going to pre-break a mirror, and we were going to shoot Anthony Starr moving his head doing all of the performances in the different parts of the mirror,” Fleet reveals. “This was all based on a photo that my ex-brother-in-law sent me. He was walking down a street in Glendale, California, came across a broken mirror that someone had thrown out, and took a photo of himself where he had five heads in the mirror. We get there on the day, and I’m realizing that this is really complicated. Anthony has to do these five different performances, and we have to deal with infinite mirrors. At the last minute, I said, ‘We have to do this on a clean mirror.’ We did it on a clear mirror and gave Anthony different eyelines. The mirror break was all done in post, and we were able to cheat his head slightly and art-direct where the break crosses his chin. Editorial was able to do split screens for the timing of the dialogue.”

    “For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”
    —Stephan Fleet, VFX Supervisor

    Initially, the plan was to use a practical mirror, but creating a digital version proved to be the more effective solution.

    A different spin on the bloodbath occurs during a fight when a drugged Frenchie (Tomer Capone) hallucinates as Kimiko Miyashiro (Karen Fukuhara) goes on a killing spree. “We went back and forth with a lot of different concepts for what this hallucination would be,” Fleet remarks. “When we filmed it, we landed on Frenchie having a synesthesia moment where he’s seeing a lot of abstract colors flying in the air. We started getting into that in post and it wasn’t working. We went back to the rubber duckies, which goes back to the story of him in the bathtub. What’s in the bathtub? Rubber duckies, bubbles and water. There was a lot of physics and logic required to figure out how these rubber duckies could float out of someone’s neck. We decided on bubbles when Kimiko hits people’s heads. At one point, we had water when she got shot, but it wasn’t working, so we killed it. We probably did about 100 different versions. We got really detailed with our rubber duckie modeling because we didn’t want it to look cartoony. That took a long time.”

    Ambrosius, voiced by Tilda Swinton, gets a lot more screentime in Season 4.

    Splinter’s split in two was achieved heavily in CG. “Erik threw out the words ‘cellular mitosis’ early on as something he wanted to use,” Fleet states. “We shot Rob Benedict on a greenscreen doing all of the different performances for the clones that pop out. It was a crazy amount of CG work with Houdini and particle and skin effects. We previs’d the sequence so we had specific actions. One clone comes out to the right and the other pulls backwards.” What tends to go unnoticed by many is Splinter’s clones setting up for a press conference being held by Firecracker (Valorie Curry). “It’s funny how no one brings up the 22-hour motion control shot that we had to do with Splinter on the stage, which was the most complicated shot!” Fleet observes. “We have this sweeping long shot that brings you into the room and follows Splinter as he carries a container to the stage and hands it off to a clone, and then you reveal five more of them interweaving with each other and interacting with all of these objects. It’s like a minute-long dance. First off, you have to choreograph it. We previs’d it, but then you need to get people to do it. We hired dancers and put different colored armbands on them. The camera is like another performer, and a metronome is going, which enables you to find a pace. That took about eight hours of rehearsal. Then Rob has to watch each one of their performances and mimic it to the beat. When he is handing off a box of cables, it’s to a double who is going to have to be erased and be him on the other side. They have to be almost perfect in their timing and lineup in order to take it over in visual effects and make it work.”
    #bouncing #rubber #duckies #flying #sheep
    BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4
    By TREVOR HOGG Images courtesy of Prime Video. For those seeking an alternative to the MCU, Prime Video has two offerings of the live-action and animated variety that take the superhero genre into R-rated territory where the hands of the god-like figures get dirty, bloodied and severed. “The Boys is about the intersection of celebrity and politics using superheroes,” states Stephan Fleet, VFX Supervisor on The Boys. “Sometimes I see the news and I don’t even know we can write to catch up to it! But we try. Invincible is an intense look at an alternate DC Universe that has more grit to the superhero side of it all. On one hand, I was jealous watching Season 1 of Invincible because in animation you can do things that you can’t do in real life on a budget.” Season 4 does not tone down the blood, gore and body count. Fleet notes, “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!” When Splintersplits in two, the cloning effect was inspired by cellular mitosis. “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!” —Stephan Fleet, VFX Supervisor A total of 1,600 visual effects shots were created for the eight episodes by ILM, Pixomondo, MPC Toronto, Spin VFX, DNEG, Untold Studios, Luma Pictures and Rocket Science VFX. Previs was a critical part of the process. “We have John Griffith, who owns a small company called CNCPT out of Texas, and he does wonderful Unreal Engine level previs,” Fleet remarks. “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” Founding Director of Federal Bureau of Superhuman Affairs, Victoria Neuman, literally gets ripped in half by two tendrils coming out of Compound V-enhanced Billy Butcher, the leader of superhero resistance group The Boys. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.” Multiple plates were shot to enable Simon Pegg to phase through the actor laying in a hospital bed. Testing can get rather elaborate. “For that end scene with Butcher’s tendrils, the room was two stories, and we were able to put the camera up high along with a bunch of blood cannons,” Fleet recalls. “When the body rips in half and explodes, there is a practical component. We rained down a bunch of real blood and guts right in front of Huey. It’s a known joke that we like to douse Jack Quaid with blood as much as possible! In this case, the special effects team led by Hudson Kenny needed to test it the day before, and I said, “I’ll be the guinea pig for the test.’ They covered the whole place with plastic like it was a Dexter kill room because you don’t want to destroy the set. 
I’m standing there in a white hazmat suit with goggles on, covered from head to toe in plastic and waiting as they’re tweaking all of these things. It sounds like World War II going on. They’re on walkie talkies to each other, and then all of a sudden, it’s ‘Five, four, three, two, one…’  And I get exploded with blood. I wanted to see what it was like, and it’s intense.” “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” —Stephan Fleet, VFX Supervisor The Deep has a love affair with an octopus called Ambrosius, voiced by Tilda Swinton. “It’s implied bestiality!” Fleet laughs. “I would call it more of a romance. What was fun from my perspective is that I knew what the look was going to be, so then it’s about putting in the details and the animation. One of the instincts that you always have when you’re making a sea creature that talks to a humanyou tend to want to give it human gestures and eyebrows. Erik Kripkesaid, ‘No. We have to find things that an octopus could do that conveys the same emotion.’ That’s when ideas came in, such as putting a little The Deep toy inside the water tank. When Ambrosius is trying to have an intimate moment or connect with him, she can wrap a tentacle around that. My favorite experience doing Ambrosius was when The Deep is reading poetry to her on a bed. CG creatures touching humans is one of the more complicated things to do and make look real. Ambrosius’ tentacles reach for his arm, and it becomes an intimate moment. More than touching the skin, displacing the bedsheet as Ambrosius moved ended up becoming a lot of CG, and we had to go back and forth a few times to get that looking right; that turned out to be tricky.” A building is replaced by a massive crowd attending a rally being held by Homelander. In a twisted form of sexual foreplay, Sister Sage has The Deep perform a transorbital lobotomy on her. “Thank you, Amazon for selling lobotomy tools as novelty items!” Fleet chuckles. “We filmed it with a lobotomy tool on set. There is a lot of safety involved in doing something like that. Obviously, you don’t want to put any performer in any situation where they come close to putting anything real near their eye. We created this half lobotomy tool and did this complicated split screen with the lobotomy tool on a teeter totter. The Deep wasin one shot and Sister Sage reacted in the other shot. To marry the two ended up being a lot of CG work. Then there are these close-ups which are full CG. I always keep a dummy head that is painted gray that I use all of the time for reference. In macrophotography I filmed this lobotomy tool going right into the eye area. I did that because the tool is chrome, so it’s reflective and has ridges. It has an interesting reflective property. I was able to see how and what part of the human eye reflects onto the tool. A lot of that shot became about realistic reflections and lighting on the tool. Then heavy CG for displacing the eye and pushing the lobotomy tool into it. That was one of the more complicated sequences that we had to achieve.” In order to create an intimate moment between Ambrosius and The Deep, a toy version of the superhero was placed inside of the water tank that she could wrap a tentacle around. 
“The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.” —Stephan Fleet, VFX Supervisor Sheep and chickens embark on a violent rampage courtesy of Compound V with the latter piercing the chest of a bodyguard belonging to Victoria Neuman. “Weirdly, that was one of our more traditional shots,’ Fleet states. “What is fun about that one is I asked for real chickens as reference. The chicken flying through his chest is real. It’s our chicken wrangler in green suit gently tossing a chicken. We blended two real plates together with some CG in the middle.” A connection was made with a sci-fi classic. “The sheep kill this bull, and we shot it is in this narrow corridor of fencing. When they run, I always equated it as the Trench Run in Star Wars and looked at the sheep as TIE fighters or X-wings coming at them.” The scene was one of the scarier moments for the visual effects team. Fleet explains, “When I read the script, I thought this could be the moment where we jump the shark. For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.” The sheep injected with Compound V develop the ability to fly and were shot in an imperfect manner to help ground the scenes. Once injected with Compound V, Hugh Campbell Sr.develops the ability to phase through objects, including human beings. “We called it the Bro-nut because his name in the script is Wall Street Bro,” Fleet notes. “That was a complicated motion control shot, repeating the move over and over again. We had to shoot multiple plates of Simon Pegg and the guy in the bed. Special effects and prosthetics created a dummy guy with a hole in his chest with practical blood dripping down. It was meshing it together and getting the timing right in post. On top of that, there was the CG blood immediately around Simon Pegg.” The phasing effect had to avoid appearing as a dissolve. “I had this idea of doing high-frequency vibration on the X axis loosely based on how The Flash vibrates through walls. You want everything to have a loose motivation that then helps trigger the visuals. We tried not to overcomplicate that because, ultimately, you want something like that to be quick. If you spend too much time on phasing, it can look cheesy. In our case, it was a lot of false walls. Simon Pegg is running into a greenscreen hole which we plug in with a wall or coming out of one. I went off the actor’s action, and we added a light opacity mix with some X-axis shake.” Providing a different twist to the fights was the replacement of spurting blood with photoreal rubber duckies during a drug-induced hallucination. Homelanderbreaks a mirror which emphasizes his multiple personality disorder. “The original plan was that special effects was going to pre-break a mirror, and we were going to shoot Anthony Starr moving his head doing all of the performances in the different parts of the mirror,” Fleet reveals. 
“This was all based on a photo that my ex-brother-in-law sent me. He was walking down a street in Glendale, California, came across a broken mirror that someone had thrown out, and took a photo of himself where he had five heads in the mirror. We get there on the day, and I’m realizing that this is really complicated. Anthony has to do these five different performances, and we have to deal with infinite mirrors. At the last minute, I said, ‘We have to do this on a clean mirror.’ We did it on a clear mirror and gave Anthony different eyelines. The mirror break was all done in post, and we were able to cheat his head slightly and art-direct where the break crosses his chin. Editorial was able to do split screens for the timing of the dialogue.” “For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.” —Stephan Fleet, VFX Supervisor Initially, the plan was to use a practical mirror, but creating a digital version proved to be the more effective solution. A different spin on the bloodbath occurs during a fight when a drugged Frenchiehallucinates as Kimiko Miyashirogoes on a killing spree. “We went back and forth with a lot of different concepts for what this hallucination would be,” Fleet remarks. “When we filmed it, we landed on Frenchie having a synesthesia moment where he’s seeing a lot of abstract colors flying in the air. We started getting into that in post and it wasn’t working. We went back to the rubber duckies, which goes back to the story of him in the bathtub. What’s in the bathtub? Rubber duckies, bubbles and water. There was a lot of physics and logic required to figure out how these rubber duckies could float out of someone’s neck. We decided on bubbles when Kimiko hits people’s heads. At one point, we had water when she got shot, but it wasn’t working, so we killed it. We probably did about 100 different versions. We got really detailed with our rubber duckie modeling because we didn’t want it to look cartoony. That took a long time.” Ambrosius, voiced by Tilda Swinton, gets a lot more screentime in Season 4. When Splintersplits in two was achieved heavily in CG. “Erik threw out the words ‘cellular mitosis’ early on as something he wanted to use,” Fleet states. “We shot Rob Benedict on a greenscreen doing all of the different performances for the clones that pop out. It was a crazy amount of CG work with Houdini and particle and skin effects. We previs’d the sequence so we had specific actions. One clone comes out to the right and the other pulls backwards.” What tends to go unnoticed by many is Splinter’s clones setting up for a press conference being held by Firecracker. “It’s funny how no one brings up the 22-hour motion control shot that we had to do with Splinter on the stage, which was the most complicated shot!” Fleet observes. “We have this sweeping long shot that brings you into the room and follows Splinter as he carries a container to the stage and hands it off to a clone, and then you reveal five more of them interweaving each other and interacting with all of these objects. It’s like a minute-long dance. First off, you have to choreograph it. 
We previs’d it, but then you need to get people to do it. We hired dancers and put different colored armbands on them. The camera is like another performer, and a metronome is going, which enables you to find a pace. That took about eight hours of rehearsal. Then Rob has to watch each one of their performances and mimic it to the beat. When he is handing off a box of cables, it’s to a double who is going to have to be erased and be him on the other side. They have to be almost perfect in their timing and lineup in order to take it over in visual effects and make it work.” #bouncing #rubber #duckies #flying #sheep
    WWW.VFXVOICE.COM
    BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4
    By TREVOR HOGG Images courtesy of Prime Video. For those seeking an alternative to the MCU, Prime Video has two offerings of the live-action and animated variety that take the superhero genre into R-rated territory where the hands of the god-like figures get dirty, bloodied and severed. “The Boys is about the intersection of celebrity and politics using superheroes,” states Stephan Fleet, VFX Supervisor on The Boys. “Sometimes I see the news and I don’t even know we can write to catch up to it! But we try. Invincible is an intense look at an alternate DC Universe that has more grit to the superhero side of it all. On one hand, I was jealous watching Season 1 of Invincible because in animation you can do things that you can’t do in real life on a budget.” Season 4 does not tone down the blood, gore and body count. Fleet notes, “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!” When Splinter (Rob Benedict) splits in two, the cloning effect was inspired by cellular mitosis. “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!” —Stephan Fleet, VFX Supervisor A total of 1,600 visual effects shots were created for the eight episodes by ILM, Pixomondo, MPC Toronto, Spin VFX, DNEG, Untold Studios, Luma Pictures and Rocket Science VFX. Previs was a critical part of the process. “We have John Griffith [Previs Director], who owns a small company called CNCPT out of Texas, and he does wonderful Unreal Engine level previs,” Fleet remarks. “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” Founding Director of Federal Bureau of Superhuman Affairs, Victoria Neuman, literally gets ripped in half by two tendrils coming out of Compound V-enhanced Billy Butcher, the leader of superhero resistance group The Boys. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.” Multiple plates were shot to enable Simon Pegg to phase through the actor laying in a hospital bed. Testing can get rather elaborate. “For that end scene with Butcher’s tendrils, the room was two stories, and we were able to put the camera up high along with a bunch of blood cannons,” Fleet recalls. “When the body rips in half and explodes, there is a practical component. We rained down a bunch of real blood and guts right in front of Huey. It’s a known joke that we like to douse Jack Quaid with blood as much as possible! In this case, the special effects team led by Hudson Kenny needed to test it the day before, and I said, “I’ll be the guinea pig for the test.’ They covered the whole place with plastic like it was a Dexter kill room because you don’t want to destroy the set. 
I’m standing there in a white hazmat suit with goggles on, covered from head to toe in plastic and waiting as they’re tweaking all of these things. It sounds like World War II going on. They’re on walkie talkies to each other, and then all of a sudden, it’s ‘Five, four, three, two, one…’  And I get exploded with blood. I wanted to see what it was like, and it’s intense.” “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” —Stephan Fleet, VFX Supervisor The Deep has a love affair with an octopus called Ambrosius, voiced by Tilda Swinton. “It’s implied bestiality!” Fleet laughs. “I would call it more of a romance. What was fun from my perspective is that I knew what the look was going to be [from Season 3], so then it’s about putting in the details and the animation. One of the instincts that you always have when you’re making a sea creature that talks to a human [is] you tend to want to give it human gestures and eyebrows. Erik Kripke [Creator, Executive Producer, Showrunner, Director, Writer] said, ‘No. We have to find things that an octopus could do that conveys the same emotion.’ That’s when ideas came in, such as putting a little The Deep toy inside the water tank. When Ambrosius is trying to have an intimate moment or connect with him, she can wrap a tentacle around that. My favorite experience doing Ambrosius was when The Deep is reading poetry to her on a bed. CG creatures touching humans is one of the more complicated things to do and make look real. Ambrosius’ tentacles reach for his arm, and it becomes an intimate moment. More than touching the skin, displacing the bedsheet as Ambrosius moved ended up becoming a lot of CG, and we had to go back and forth a few times to get that looking right; that turned out to be tricky.” A building is replaced by a massive crowd attending a rally being held by Homelander. In a twisted form of sexual foreplay, Sister Sage has The Deep perform a transorbital lobotomy on her. “Thank you, Amazon for selling lobotomy tools as novelty items!” Fleet chuckles. “We filmed it with a lobotomy tool on set. There is a lot of safety involved in doing something like that. Obviously, you don’t want to put any performer in any situation where they come close to putting anything real near their eye. We created this half lobotomy tool and did this complicated split screen with the lobotomy tool on a teeter totter. The Deep was [acting in a certain way] in one shot and Sister Sage reacted in the other shot. To marry the two ended up being a lot of CG work. Then there are these close-ups which are full CG. I always keep a dummy head that is painted gray that I use all of the time for reference. In macrophotography I filmed this lobotomy tool going right into the eye area. I did that because the tool is chrome, so it’s reflective and has ridges. It has an interesting reflective property. I was able to see how and what part of the human eye reflects onto the tool. A lot of that shot became about realistic reflections and lighting on the tool. Then heavy CG for displacing the eye and pushing the lobotomy tool into it. That was one of the more complicated sequences that we had to achieve.” In order to create an intimate moment between Ambrosius and The Deep, a toy version of the superhero was placed inside of the water tank that she could wrap a tentacle around. 
“The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.” —Stephan Fleet, VFX Supervisor Sheep and chickens embark on a violent rampage courtesy of Compound V with the latter piercing the chest of a bodyguard belonging to Victoria Neuman. “Weirdly, that was one of our more traditional shots,’ Fleet states. “What is fun about that one is I asked for real chickens as reference. The chicken flying through his chest is real. It’s our chicken wrangler in green suit gently tossing a chicken. We blended two real plates together with some CG in the middle.” A connection was made with a sci-fi classic. “The sheep kill this bull, and we shot it is in this narrow corridor of fencing. When they run, I always equated it as the Trench Run in Star Wars and looked at the sheep as TIE fighters or X-wings coming at them.” The scene was one of the scarier moments for the visual effects team. Fleet explains, “When I read the script, I thought this could be the moment where we jump the shark. For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.” The sheep injected with Compound V develop the ability to fly and were shot in an imperfect manner to help ground the scenes. Once injected with Compound V, Hugh Campbell Sr. (Simon Pegg) develops the ability to phase through objects, including human beings. “We called it the Bro-nut because his name in the script is Wall Street Bro,” Fleet notes. “That was a complicated motion control shot, repeating the move over and over again. We had to shoot multiple plates of Simon Pegg and the guy in the bed. Special effects and prosthetics created a dummy guy with a hole in his chest with practical blood dripping down. It was meshing it together and getting the timing right in post. On top of that, there was the CG blood immediately around Simon Pegg.” The phasing effect had to avoid appearing as a dissolve. “I had this idea of doing high-frequency vibration on the X axis loosely based on how The Flash vibrates through walls. You want everything to have a loose motivation that then helps trigger the visuals. We tried not to overcomplicate that because, ultimately, you want something like that to be quick. If you spend too much time on phasing, it can look cheesy. In our case, it was a lot of false walls. Simon Pegg is running into a greenscreen hole which we plug in with a wall or coming out of one. I went off the actor’s action, and we added a light opacity mix with some X-axis shake.” Providing a different twist to the fights was the replacement of spurting blood with photoreal rubber duckies during a drug-induced hallucination. Homelander (Anthony Starr) breaks a mirror which emphasizes his multiple personality disorder. “The original plan was that special effects was going to pre-break a mirror, and we were going to shoot Anthony Starr moving his head doing all of the performances in the different parts of the mirror,” Fleet reveals. 
“This was all based on a photo that my ex-brother-in-law sent me. He was walking down a street in Glendale, California, came across a broken mirror that someone had thrown out, and took a photo of himself where he had five heads in the mirror. We get there on the day, and I’m realizing that this is really complicated. Anthony has to do these five different performances, and we have to deal with infinite mirrors. At the last minute, I said, ‘We have to do this on a clean mirror.’ We did it on a clear mirror and gave Anthony different eyelines. The mirror break was all done in post, and we were able to cheat his head slightly and art-direct where the break crosses his chin. Editorial was able to do split screens for the timing of the dialogue.” “For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.” —Stephan Fleet, VFX Supervisor Initially, the plan was to use a practical mirror, but creating a digital version proved to be the more effective solution. A different spin on the bloodbath occurs during a fight when a drugged Frenchie (Tomer Capone) hallucinates as Kimiko Miyashiro (Karen Fukuhara) goes on a killing spree. “We went back and forth with a lot of different concepts for what this hallucination would be,” Fleet remarks. “When we filmed it, we landed on Frenchie having a synesthesia moment where he’s seeing a lot of abstract colors flying in the air. We started getting into that in post and it wasn’t working. We went back to the rubber duckies, which goes back to the story of him in the bathtub. What’s in the bathtub? Rubber duckies, bubbles and water. There was a lot of physics and logic required to figure out how these rubber duckies could float out of someone’s neck. We decided on bubbles when Kimiko hits people’s heads. At one point, we had water when she got shot, but it wasn’t working, so we killed it. We probably did about 100 different versions. We got really detailed with our rubber duckie modeling because we didn’t want it to look cartoony. That took a long time.” Ambrosius, voiced by Tilda Swinton, gets a lot more screentime in Season 4. When Splinter (Rob Benedict) splits in two was achieved heavily in CG. “Erik threw out the words ‘cellular mitosis’ early on as something he wanted to use,” Fleet states. “We shot Rob Benedict on a greenscreen doing all of the different performances for the clones that pop out. It was a crazy amount of CG work with Houdini and particle and skin effects. We previs’d the sequence so we had specific actions. One clone comes out to the right and the other pulls backwards.” What tends to go unnoticed by many is Splinter’s clones setting up for a press conference being held by Firecracker (Valorie Curry). “It’s funny how no one brings up the 22-hour motion control shot that we had to do with Splinter on the stage, which was the most complicated shot!” Fleet observes. “We have this sweeping long shot that brings you into the room and follows Splinter as he carries a container to the stage and hands it off to a clone, and then you reveal five more of them interweaving each other and interacting with all of these objects. It’s like a minute-long dance. 
First off, you have to choreograph it. We previs’d it, but then you need to get people to do it. We hired dancers and put different colored armbands on them. The camera is like another performer, and a metronome is going, which enables you to find a pace. That took about eight hours of rehearsal. Then Rob has to watch each one of their performances and mimic it to the beat. When he is handing off a box of cables, it’s to a double who is going to have to be erased and be him on the other side. They have to be almost perfect in their timing and lineup in order to take it over in visual effects and make it work.”
  • India is using AI and satellites to map urban heat vulnerability in cities like Delhi. They’re identifying which buildings are most at risk from extreme temperatures. This effort seems to aim at providing relief to those affected. It’s all very technical and detailed, but honestly, it feels a bit over the top.

    Just another day of high-tech solutions for problems that keep piling up.

    #UrbanHeat #India #ArtificialIntelligence #Satellites #HeatVulnerability
    India Is Using AI and Satellites to Map Urban Heat Vulnerability Down to the Building Level
    Remote-sensing data and artificial intelligence are mapping the most heat-vulnerable buildings in cities like Delhi, in an effort to target relief from extreme temperatures at a granular level.
  • Take a Look at Procedural Ivy in This Dreamlike 3D Scene

    3D Artist Nick Carver, known for his outstanding stylized artwork, unveiled a new whimsical scene showing fascinating procedural ivy. The artist stayed true to his signature style, with dreamlike colors and charming hand-painted aesthetics, featuring richly detailed set dressing and high-quality animation. Earlier, Nick Carver showcased a splendid character study, a peaceful 3D scene with a calm river, and more.
  • Your next nonfiction book could write itself, but you’ll own the rights

    TL;DR: Turn ideas into full-length books with AI—lifetime access for just $49. Writing a book takes time—something most of us don’t have between inbox chaos and back-to-back meetings. But what if all you needed was an idea? That’s where YouBooks steps in. This AI-powered tool helps you generate full-length nonfiction books with just a few prompts, and right now, you can lock in lifetime access for $49 (reg. $540).
    YouBooks pulls from several top-tier AI models, like ChatGPT, Claude, and Gemini, and combines them with live web research to build out detailed, structured manuscripts up to 300,000 words. Whether you want to write about productivity, startup culture, parenting, or personal finance, feed in your topic and let the AI do the heavy lifting.
    Why is YouBooks for you?

    150,000 credits per month (1 word = 1 credit)
    Downloadable formats: PDF, DOCX, EPUB
    Commercial rights so that you can sell, share, or publish your books
    Custom style options to match your tone or brand

    It’s a serious time-saver if you’ve been sitting on an idea forever or want to build a content empire without writing every word yourself. Plus, unlike many AI tools, YouBooks gives you full ownership of the content you create.

    Snag a lifetime subscription to YouBooks for $49 and start turning your thoughts into fully formed nonfiction books: no ghostwriters, no subscriptions, and no gatekeepers.

    Youbooks – AI Nonfiction Book Generator: Lifetime Subscription. See Deal
    StackSocial prices subject to change.
  • Ankur Kothari Q&A: Customer Engagement Book Interview

    Reading Time: 9 minutes
    In marketing, data isn’t a buzzword. It’s the lifeblood of all successful campaigns.
    But are you truly harnessing its power, or are you drowning in a sea of information? To answer this question, we sat down with Ankur Kothari, a seasoned Martech expert, to dive deep into this crucial topic.
    This interview, originally conducted for Chapter 6 of “The Customer Engagement Book: Adapt or Die,” explores how businesses can translate raw data into actionable insights that drive real results.
    Ankur shares his wealth of knowledge on identifying valuable customer engagement data, distinguishing between signal and noise, and ultimately, shaping real-time strategies that keep companies ahead of the curve.

    Ankur Kothari Q&A Interview
    1. What types of customer engagement data are most valuable for making strategic business decisions?
    Primarily, there are four different buckets of customer engagement data. I would begin with behavioral data, encompassing website interactions, purchase history, and app usage patterns.
    Second would be demographic information: age, location, income, and other relevant personal characteristics.
    Third would be sentiment analysis, where we derive information from social media interaction, customer feedback, or other customer reviews.
    Fourth would be the customer journey data.

    We track touchpoints across a customer’s various channels to understand the journey path and conversion. Combining these four primary sources helps us understand the engagement data.
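
    As a rough illustration of how these four buckets might be represented in code (a hypothetical sketch; the field names are illustrative and come from this editor, not from the interview or any real CDP schema):

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical schema for the four engagement-data buckets described above.
# All field names are illustrative only.

@dataclass
class Behavioral:
    page_views: list[str] = field(default_factory=list)  # website interaction
    purchases: list[str] = field(default_factory=list)   # purchase history
    app_sessions: int = 0                                 # app usage patterns

@dataclass
class Demographics:
    age: int | None = None
    location: str | None = None
    income_band: str | None = None

@dataclass
class SentimentSignal:
    source: str   # e.g. "social", "review", "feedback"
    score: float  # -1.0 (negative) to 1.0 (positive)

@dataclass
class Touchpoint:
    channel: str        # e.g. "email", "web", "in-store"
    timestamp: datetime
    converted: bool = False

@dataclass
class CustomerEngagement:
    behavioral: Behavioral
    demographics: Demographics
    sentiment: list[SentimentSignal]
    journey: list[Touchpoint]   # ordered journey data across channels
```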

    2. How do you distinguish between data that is actionable versus data that is just noise?
    First is relevance to your business objectives: actionable data directly relates to your specific goals or KPIs. Then we take help from statistical significance.
    Actionable data shows clear patterns or trends that are statistically valid, whereas other data consists of random fluctuations or outliers, which may not be what you are interested in.

    You also want to make sure that there is consistency across sources.
    Actionable insights are typically corroborated by multiple data points or channels, while other data or noise can be more isolated and contradictory.
    Actionable data suggests clear opportunities for improvement or decision making, whereas noise does not lead to meaningful actions or changes in strategy.

    By applying these criteria, I can effectively filter out the noise and focus on data that delivers or drives valuable business decisions.
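
    One concrete way to apply the statistical-significance filter described here is a two-proportion z-test on a metric before and after a change (a minimal sketch with made-up numbers; the test choice and the 0.05 threshold are common conventions, not something Ankur prescribes):

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates.

    A small p-value suggests a real pattern (signal); a large one
    suggests random fluctuation (noise)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * norm.sf(abs(z))                         # two-sided p-value

# Example: did a campaign tweak actually move conversions, or is it noise?
p = two_proportion_ztest(conv_a=120, n_a=2400, conv_b=150, n_b=2500)
print(f"p-value: {p:.4f}")  # p < 0.05 -> treat as actionable signal
```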

    3. How can customer engagement data be used to identify and prioritize new business opportunities?
    First, it helps us to uncover unmet needs.

    By analyzing the customer feedback, touch points, support interactions, or usage patterns, we can identify the gaps in our current offerings or areas where customers are experiencing pain points.

    Second would be identifying emerging needs.
    Monitoring changes in customer behavior or preferences over time can reveal new market trends or shifts in demand, allowing my company to adapt their products or services accordingly.
    Third would be segmentation analysis.
    Detailed customer data analysis enables us to identify unserved or underserved segments or niche markets that may represent untapped opportunities for growth or expansion into newer areas and new geographies.
    Last is to build competitive differentiation.

    Engagement data can highlight where our companies outperform competitors, helping us to prioritize opportunities that leverage existing strengths and unique selling propositions.
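
    For the segmentation-analysis point, here is a toy example of how underserved segments might surface from engagement features (hypothetical data and feature names; k-means is one common clustering choice, not a method named in the interview):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical features per customer: [recency_days, frequency, monetary].
rng = np.random.default_rng(0)
X = rng.random((500, 3)) * [90, 20, 1000]   # stand-in for real engagement data

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Segment sizes and averages hint at niches: a small, high-value cluster
# may be an underserved segment worth prioritizing.
for k in range(4):
    members = X[kmeans.labels_ == k]
    print(f"segment {k}: n={len(members)}, avg spend={members[:, 2].mean():.0f}")
```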

    4. Can you share an example of where data insights directly influenced a critical decision?
    I will share an example from my previous organization at one of the financial services where we were very data-driven, which made a major impact on our critical decision regarding our credit card offerings.
    We analyzed the customer engagement data, and we discovered that a large segment of our millennial customers were underutilizing our traditional credit cards but showed high engagement with mobile payment platforms.
    That insight led us to develop and launch our first digital credit card product with enhanced mobile features and rewards tailored to the millennial spending habits. Since we had access to a lot of transactional data as well, we were able to build a financial product which met that specific segment’s needs.

    That data-driven decision resulted in a 40% increase in our new credit card applications from this demographic within the first quarter of the launch. Subsequently, our market share improved in that specific segment, which was very crucial.

    5. Are there any other examples of ways that you see customer engagement data being able to shape marketing strategy in real time?
    When it comes to using engagement data in real time, we do quite a few things. Over the past two or three years, we have been using it for dynamic content personalization, adjusting website content, email messaging, or ad creative based on real-time user behavior and preferences.
    We automate campaign optimization using specific AI-driven tools to continuously analyze performance metrics and automatically reallocate the budget to top-performing channels or ad segments.
    Then we also build responsive social media engagement: monitoring social media sentiment and trending topics to quickly adapt the messaging and create timely and relevant content.

    With one-on-one personalization, we do a lot of A/B testing as part of overall rapid testing of market elements like subject lines and CTAs, building various successful variants of the campaigns.
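
    The automated budget reallocation mentioned above could look something like this sketch, shifting spend toward top-performing channels while reserving a slice for exploration (a hypothetical heuristic written for illustration, not any vendor's actual algorithm):

```python
def reallocate_budget(channel_stats: dict[str, tuple[int, int]],
                      total_budget: float,
                      explore: float = 0.10) -> dict[str, float]:
    """Split total_budget across channels in proportion to conversion rate.

    channel_stats maps channel -> (conversions, spend_units). A fixed
    exploration share keeps weak channels generating data."""
    rates = {ch: (c / s if s else 0.0) for ch, (c, s) in channel_stats.items()}
    explore_each = total_budget * explore / len(rates)   # exploration slice
    exploit_pool = total_budget * (1 - explore)          # performance slice
    total_rate = sum(rates.values()) or 1.0              # avoid divide-by-zero
    return {ch: explore_each + exploit_pool * (r / total_rate)
            for ch, r in rates.items()}

budget = reallocate_budget(
    {"email": (120, 1000), "social": (90, 1500), "search": (200, 1200)},
    total_budget=10_000.0)
print(budget)   # most budget flows to the highest-converting channel
```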

    6. How are you doing the 1:1 personalization?
    We have advanced CDP systems, and we are tracking each customer’s behavior in real-time. So the moment they move to different channels, we know what the context is, what the relevance is, and the recent interaction points, so we can cater the right offer.
    So for example, if you looked at a certain offer on the website and you came from Google, and then the next day you walk into an in-person interaction, our agent will already know that you were looking at that offer.
    That gives our customer or potential customer more one-to-one personalization instead of just a segment-based or bulk-interaction kind of experience.

    We have a huge team of data scientists, data analysts, and AI model creators who help us to analyze big volumes of data and bring the right insights to our marketing and sales team so that they can provide the right experience to our customers.
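
    The cross-channel context lookup Ankur describes can be pictured as a per-customer event timeline that any channel can query (an illustrative toy from this editor, not a real CDP API):

```python
from collections import defaultdict
from datetime import datetime, timezone

class ProfileStore:
    """Toy unified profile: every interaction lands in one timeline, so any
    channel (web, email, in-branch agent) can ask for the latest context."""

    def __init__(self):
        self._events = defaultdict(list)  # customer_id -> [(ts, channel, detail)]

    def track(self, customer_id: str, channel: str, detail: str) -> None:
        ts = datetime.now(timezone.utc)
        self._events[customer_id].append((ts, channel, detail))

    def last_context(self, customer_id: str):
        events = self._events.get(customer_id)
        return events[-1] if events else None

store = ProfileStore()
store.track("cust-42", "web", "viewed: gold-card-offer (referrer: google)")
# Next day, the in-branch agent pulls the same context:
print(store.last_context("cust-42"))
```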

    7. What role does customer engagement data play in influencing cross-functional decisions, such as with product development, sales, and customer service?
    Primarily with product development. We have different products, not just the financial products (or whatever products an organization sells) but also things like the mobile apps and websites customers use for transactions, so that kind of product development gets improved.
    The engagement data helps our sales and marketing teams create more targeted campaigns, optimize channel selection, and refine messaging to resonate with specific customer segments.

    Customer service also benefits: anticipating common issues, personalizing support interactions over phone, email, or chat, and proactively addressing potential problems leads to improved customer satisfaction and retention.

    So in general, the cross-functional application of engagement data improves the customer-centric approach throughout the organization.

    8. What do you think some of the main challenges marketers face when trying to translate customer engagement data into actionable business insights?
    I think the biggest is the sheer amount of data we are dealing with. As customers become more digitally savvy and move to digital channels, we are getting a lot of data, and that volume can be overwhelming, making it very difficult to identify truly meaningful patterns and insights.

    Because of the huge data overload, we create data silos in this process, so information often exists in separate systems across different departments. We are not able to build a holistic view of customer engagement.

    Because of data silos and overload of data, data quality issues appear. There is inconsistency, and inaccurate data can lead to incorrect insights or poor decision-making. Quality issues could also be due to the wrong format of the data, or the data is stale and no longer relevant.
    As we are growing and adding more people to help us understand customer engagement, I’ve also noticed that technical folks, especially data scientists and data analysts, often lack the skills to properly interpret the data or apply its insights effectively.
    So there’s a lack of understanding of marketing and sales as domains.
    It’s a huge effort and can take a lot of investment.

    Not being able to calculate the ROI of your overall investment is a big challenge that many organizations are facing.

    9. Why do you think the analysts don’t have the business acumen to properly do more than analyze the data?
    If people do not have the right idea of why we are collecting this data, we collect a lot of noise, and that brings in huge volumes of data. If you cannot stop that from step one—not bringing noise into the data system—that cannot be done by just technical folks or people who do not have business knowledge.
    Business people do not know everything about what data is being collected from which source and what data they need. It’s a gap between business domain knowledge, specifically marketing and sales needs, and technical folks who don’t have a lot of exposure to that side.

    Similarly, marketing business people do not have much exposure to the technical side — what’s possible to do with data, how much effort it takes, what’s relevant versus not relevant, and how to prioritize which data sources will be most important.

    10. Do you have any suggestions for how this can be overcome, or have you seen it in action where it has been solved before?
    First, cross-functional training: training different roles to help them understand why we’re doing this and what the business goals are, giving technical people exposure to what marketing and sales teams do.
    And giving business folks exposure to the technology side through training on different tools, strategies, and the roadmap of data integrations.
    The second is helping teams work more collaboratively. So it’s not like the technology team works in a silo and comes back when their work is done, and then marketing and sales teams act upon it.

    Now we’re making it more like one team. You work together so that you can complement each other, and we have a better strategy from day one.

    11. How do you address skepticism or resistance from stakeholders when presenting data-driven recommendations?
    We present clear business cases where we demonstrate how data-driven recommendations can directly align with business objectives and potential ROI.
    We build compelling visualizations, easy-to-understand charts and graphs that clearly illustrate the insights and the implications for business goals.

    We also do a lot of POCs and pilot projects with small-scale implementations to showcase tangible results and build confidence in the data-driven approach throughout the organization.

    12. What technologies or tools have you found most effective for gathering and analyzing customer engagement data?
    I’ve found that Customer Data Platforms help us unify customer data from various sources, providing a comprehensive view of customer interactions across touch points.
    Having advanced analytics platforms — tools with AI and machine learning capabilities that can process large volumes of data and uncover complex patterns and insights — is a great value to us.
    We always use, or many organizations use, marketing automation systems to improve marketing team productivity, helping us track and analyze customer interactions across multiple channels.
    Another thing is social media listening tools, wherever your brand is mentioned or you want to measure customer sentiment over social media, or track the engagement of your campaigns across social media platforms.

    Last is web analytics tools, which provide detailed insights into your website visitors’ behaviors and engagement metrics across browsers, devices, and mobile apps.

    13. How do you ensure data quality and consistency across multiple channels to make these informed decisions?
    We established clear guidelines for data collection, storage, and usage across all channels to maintain consistency. Then we use data integration platforms — tools that consolidate data from various sources into a single unified view, reducing discrepancies and inconsistencies.
    As we collect data from different sources, we clean it so that it becomes cleaner with every stage of processing.
    We also conduct regular data audits — performing periodic checks to identify and rectify data quality issues, ensuring accuracy and reliability of information. We also deploy standardized data formats.

    On top of that, we have various automated data cleansing tools, specific software to detect and correct data errors, redundancies, duplicates, and inconsistencies in data sets automatically.
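
    A minimal illustration of the standardization, cleansing, and deduplication steps listed above (hypothetical column names; a real pipeline would be far more involved):

```python
import pandas as pd

# Toy records with the kinds of issues described above: inconsistent
# formats, wrong types, missing values, and duplicates.
df = pd.DataFrame({
    "email": ["A@x.com", "a@x.com", "b@y.com", None],
    "channel": ["Email", "email", "WEB", "web"],
    "spend": ["100", "100", "250", "75"],
})

df["email"] = df["email"].str.lower()                      # standardized format
df["channel"] = df["channel"].str.lower()
df["spend"] = pd.to_numeric(df["spend"], errors="coerce")  # fix wrong types
df = df.dropna(subset=["email"])                           # drop unusable rows
df = df.drop_duplicates(subset=["email"])                  # remove duplicates
print(df)
```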

    14. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years?
    The first thing that’s been the biggest trend from the past two years is AI-driven decision making, which I think will become more prevalent, with advanced algorithms processing vast amounts of engagement data in real-time to inform strategic choices.
    Somewhat related to this is predictive analytics, which will play an even larger role, enabling businesses to anticipate customer needs and market trends with more accuracy and better predictive capabilities.
    We also touched upon hyper-personalization. We are all striving toward hyper-personalization at scale, which is more one-on-one personalization, as we are increasingly capturing more engagement data and building bigger systems and infrastructure to process those large volumes of data and achieve those hyper-personalization use cases.
    As the world is collecting more data, privacy concerns and regulations come into play.
    I believe in the next few years there will be more innovation toward how businesses can collect data ethically and what the usage practices are, leading to more transparent and consent-based engagement data strategies.
    And lastly, I think about the integration of engagement data, which is always a big challenge. I believe as we’re solving those integration challenges, we are adding more and more complex data sources to the picture.

    So I think there will need to be more innovation or sophistication brought into data integration strategies, which will help us take a truly customer-centric approach to strategy formulation.

    This interview Q&A was hosted with Ankur Kothari, a previous Martech Executive, for Chapter 6 of The Customer Engagement Book: Adapt or Die.
    Download the PDF or request a physical copy of the book here.
    The post Ankur Kothari Q&A: Customer Engagement Book Interview appeared first on MoEngage.
  • Monitoring and Support Engineer at Keywords Studios

    Monitoring and Support Engineer
    Keywords Studios, Pasig City, Metro Manila, Philippines

    We are seeking an experienced Monitoring and Support Engineer to support the technology initiatives of the IT Infrastructure team at Keywords. The Monitoring and Support Engineer will be responsible for follow-the-sun monitoring of IT infrastructure, prompt reaction to all infrastructure incidents, and primary resolution of infrastructure incidents and support requests.

    Responsibilities
    Full scope of tasks including but not limited to:
    Ensure that all incidents are handled within SLAs.
    Perform initial troubleshooting of infrastructure incidents, restoring services and escalating to level 3 experts if necessary.
    Ensure maximum network and service availability through proactive monitoring.
    Ensure all incident and alert tickets contain detailed technical information.
    Participate in problem management processes.
    Ensure that all incidents and critical alerts are documented and escalated if necessary.
    Ensure effective communication to customers about incidents and outages.
    Identify opportunities for process improvement and efficiency enhancements.
    Participate in documentation creation to reduce BAU support activities by ensuring that the Service Desks have adequate knowledge articles to close support tickets at level 1.
    Participate in reporting on monitored data and incidents on company infrastructure.
    Implement best practices and lessons learned from initiatives and projects to optimize future outcomes.

    Requirements
    Bachelor's degree in a relevant technical field or equivalent experience.
    Understanding of IT infrastructure technologies, standards, and trends.
    Technical background with 3+ years' experience in an IT operations role delivering IT infrastructure support, monitoring, and incident management.
    Technical knowledge of the Microsoft stack: Windows networking, Active Directory, Exchange.
    Technical knowledge of network, storage, and server equipment, virtualization, and production setups.
    Exceptional communication and presentation skills, with the ability to articulate technical concepts to non-technical audiences.
    Strong analytical and problem-solving skills.
    Strong customer service orientation.

    Benefits
    Great Place to Work certified for 4 consecutive years.
    Flexible work arrangement.
    Global exposure.
  • Fusion and AI: How private sector tech is powering progress at ITER

    In April 2025, at the ITER Private Sector Fusion Workshop in Cadarache, something remarkable unfolded. In a room filled with scientists, engineers and software visionaries, the line between big science and commercial innovation began to blur.  
    Three organisations – Microsoft Research, Arena and Brigantium Engineering – shared how artificial intelligence, already transforming everything from language models to logistics, is now stepping into a new role: helping humanity to unlock the power of nuclear fusion. 
    Each presenter addressed a different part of the puzzle, but the message was the same: AI isn’t just a buzzword anymore. It’s becoming a real tool – practical, powerful and indispensable – for big science and engineering projects, including fusion. 
    “If we think of the agricultural revolution and the industrial revolution, the AI revolution is next – and it’s coming at a pace which is unprecedented,” said Kenji Takeda, director of research incubations at Microsoft Research. 
Microsoft’s collaboration with ITER is already in motion. Just a month before the workshop, the two teams signed a Memorandum of Understanding (MoU) to explore how AI can accelerate research and development. This follows ITER’s initial use of Microsoft technology to empower their teams.
A chatbot built on the Azure OpenAI service was developed to help staff navigate technical knowledge across more than a million ITER documents using natural conversation. GitHub Copilot assists with coding, while AI helps to resolve IT support tickets – those everyday but essential tasks that keep the lights on. 
    But Microsoft’s vision goes deeper. Fusion demands materials that can survive extreme conditions – heat, radiation, pressure – and that’s where AI shows a different kind of potential. MatterGen, a Microsoft Research generative AI model for materials, designs entirely new materials based on specific properties.
    “It’s like ChatGPT,” said Takeda, “but instead of ‘Write me a poem’, we ask it to design a material that can survive as the first wall of a fusion reactor.” 
    The next step? MatterSim – a simulation tool that predicts how these imagined materials will behave in the real world. By combining generation and simulation, Microsoft hopes to uncover materials that don’t yet exist in any catalogue. 
    While Microsoft tackles the atomic scale, Arena is focused on a different challenge: speeding up hardware development. As general manager Michael Frei put it: “Software innovation happens in seconds. In hardware, that loop can take months – or years.” 
Arena’s answer is Atlas, a multimodal AI platform that acts as an extra set of hands – and eyes – for engineers. It can read data sheets, interpret lab results, analyse circuit diagrams and even interact with lab equipment through software interfaces. “Instead of adjusting an oscilloscope manually,” said Frei, “you can just say, ‘Verify the I2C [inter-integrated circuit] protocol’, and Atlas gets it done.” 
    It doesn’t stop there. Atlas can write and adapt firmware on the fly, responding to real-time conditions. That means tighter feedback loops, faster prototyping and fewer late nights in the lab. Arena aims to make building hardware feel a little more like writing software – fluid, fast and assisted by smart tools. 

    Fusion, of course, isn’t just about atoms and code – it’s also about construction. Gigantic, one-of-a-kind machines don’t build themselves. That’s where Brigantium Engineering comes in.
    Founder Lynton Sutton explained how his team uses “4D planning” – a marriage of 3D CAD models and detailed construction schedules – to visualise how everything comes together over time. “Gantt charts are hard to interpret. 3D models are static. Our job is to bring those together,” he said. 
    The result is a time-lapse-style animation that shows the construction process step by step. It’s proven invaluable for safety reviews and stakeholder meetings. Rather than poring over spreadsheets, teams can simply watch the plan come to life. 
    And there’s more. Brigantium is bringing these models into virtual reality using Unreal Engine – the same one behind many video games. One recent model recreated ITER’s tokamak pit using drone footage and photogrammetry. The experience is fully interactive and can even run in a web browser.
    “We’ve really improved the quality of the visualisation,” said Sutton. “It’s a lot smoother; the textures look a lot better. Eventually, we’ll have this running through a web browser, so anybody on the team can just click on a web link to navigate this 4D model.” 
    Looking forward, Sutton believes AI could help automate the painstaking work of syncing schedules with 3D models. One day, these simulations could reach all the way down to individual bolts and fasteners – not just with impressive visuals, but with critical tools for preventing delays. 
    Despite the different approaches, one theme ran through all three presentations: AI isn’t just a tool for office productivity. It’s becoming a partner in creativity, problem-solving and even scientific discovery. 
    Takeda mentioned that Microsoft is experimenting with “world models” inspired by how video games simulate physics. These models learn about the physical world by watching pixels in the form of videos of real phenomena such as plasma behaviour. “Our thesis is that if you showed this AI videos of plasma, it might learn the physics of plasmas,” he said. 
    It sounds futuristic, but the logic holds. The more AI can learn from the world, the more it can help us understand it – and perhaps even master it. At its heart, the message from the workshop was simple: AI isn’t here to replace the scientist, the engineer or the planner; it’s here to help, and to make their work faster, more flexible and maybe a little more fun.
    As Takeda put it: “Those are just a few examples of how AI is starting to be used at ITER. And it’s just the start of that journey.” 
    If these early steps are any indication, that journey won’t just be faster – it might also be more inspired. 
  • Malicious PyPI Package Masquerades as Chimera Module to Steal AWS, CI/CD, and macOS Data

    Jun 16, 2025Ravie LakshmananMalware / DevOps

Cybersecurity researchers have discovered a malicious package on the Python Package Index (PyPI) repository that's capable of harvesting sensitive developer-related information, such as credentials, configuration data, and environment variables, among others.
The package, named chimera-sandbox-extensions, attracted 143 downloads and likely targets users of a service called Chimera Sandbox, which was released by Singaporean tech company Grab last August to facilitate "experimentation and development of [machine learning] solutions."
    The package masquerades as a helper module for Chimera Sandbox, but "aims to steal credentials and other sensitive information such as Jamf configuration, CI/CD environment variables, AWS tokens, and more," JFrog security researcher Guy Korolevski said in a report published last week.
Once installed, it attempts to connect to an external domain whose domain name is generated using a domain generation algorithm (DGA) in order to download and execute a next-stage payload.
    Specifically, the malware acquires from the domain an authentication token, which is then used to send a request to the same domain and retrieve the Python-based information stealer.
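For readers unfamiliar with the term, a domain generation algorithm derives a rotating list of pseudo-random domain names from a shared seed, letting malware and its operator rendezvous without hard-coding a single, easily blocked address. The sketch below illustrates the general idea only; the actual algorithm used by chimera-sandbox-extensions has not been published, and the seed, hashing scheme, and reserved .example TLD here are invented for illustration.

```python
# Illustrative domain generation algorithm (DGA) sketch.
# NOT the algorithm from chimera-sandbox-extensions; the seed, hash
# scheme, and TLD are invented for illustration only.
import hashlib
from datetime import date

def candidate_domains(seed: str, day: date, count: int = 5) -> list[str]:
    """Derive deterministic candidate domains from a seed and a date."""
    domains = []
    for i in range(count):
        material = f"{seed}:{day.isoformat()}:{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        domains.append(digest[:16] + ".example")  # .example is a reserved TLD
    return domains

# Malware and operator can compute the same list independently,
# then try each candidate until one resolves.
print(candidate_domains("demo-seed", date(2025, 6, 16)))
```

Because both sides can regenerate the list offline, blocking any single domain does little; defenders typically counter DGAs by flagging the characteristic pattern of failed lookups instead.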

    The stealer malware is equipped to siphon a wide range of data from infected machines. This includes -

    JAMF receipts, which are records of software packages installed by Jamf Pro on managed computers
    Pod sandbox environment authentication tokens and git information
    CI/CD information from environment variables
    Zscaler host configuration
    Amazon Web Services account information and tokens
    Public IP address
    General platform, user, and host information

    The kind of data gathered by the malware shows that it's mainly geared towards corporate and cloud infrastructure. In addition, the extraction of JAMF receipts indicates that it's also capable of targeting Apple macOS systems.
    The collected information is sent via a POST request back to the same domain, after which the server assesses if the machine is a worthy target for further exploitation. However, JFrog said it was unable to obtain the payload at the time of analysis.
    "The targeted approach employed by this malware, along with the complexity of its multi-stage targeted payload, distinguishes it from the more generic open-source malware threats we have encountered thus far, highlighting the advancements that malicious packages have made recently," Jonathan Sar Shalom, director of threat research at JFrog Security Research team, said.

    "This new sophistication of malware underscores why development teams remain vigilant with updates—alongside proactive security research – to defend against emerging threats and maintain software integrity."
    The disclosure comes as SafeDep and Veracode detailed a number of malware-laced npm packages that are designed to execute remote code and download additional payloads. The packages in question are listed below -

eslint-config-airbnb-compat (676 downloads)
ts-runtime-compat-check (1,588 downloads)
solders (983 downloads)
@mediawave/lib (386 downloads)

All the identified npm packages have since been taken down from npm, but not before they were downloaded hundreds of times from the package registry.
SafeDep's analysis of eslint-config-airbnb-compat found that the JavaScript library has ts-runtime-compat-check listed as a dependency, which, in turn, contacts an external server defined in the former package ("proxy.eslint-proxy[.]site") to retrieve and execute a Base64-encoded string. The exact nature of the payload is unknown.
    "It implements a multi-stage remote code execution attack using a transitive dependency to hide the malicious code," SafeDep researcher Kunal Singh said.
    Solders, on the other hand, has been found to incorporate a post-install script in its package.json, causing the malicious code to be automatically executed as soon as the package is installed.
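npm runs lifecycle scripts such as preinstall and postinstall automatically during npm install, which is what gives this technique its reach. As a quick defensive aid, a script along the following lines can flag installed packages that declare install-time hooks; this is a minimal sketch assuming a standard node_modules layout, not a substitute for dedicated supply chain tooling.

```python
# Minimal sketch: flag npm packages that declare install-time lifecycle
# scripts (preinstall/install/postinstall), the hook abused by solders.
# A heuristic aid only; many legitimate packages also use these hooks.
import json
from pathlib import Path

HOOKS = ("preinstall", "install", "postinstall")

def find_install_hooks(node_modules: str = "node_modules") -> None:
    for manifest in Path(node_modules).rglob("package.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or malformed manifests
        scripts = data.get("scripts", {}) if isinstance(data, dict) else {}
        for hook in HOOKS:
            if hook in scripts:
                print(f"{manifest.parent.name}: {hook} -> {scripts[hook]}")

if __name__ == "__main__":
    find_install_hooks()
```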
    "At first glance, it's hard to believe that this is actually valid JavaScript," the Veracode Threat Research team said. "It looks like a seemingly random collection of Japanese symbols. It turns out that this particular obfuscation scheme uses the Unicode characters as variable names and a sophisticated chain of dynamic code generation to work."
    Decoding the script reveals an extra layer of obfuscation, unpacking which reveals its main function: Check if the compromised machine is Windows, and if so, run a PowerShell command to retrieve a next-stage payload from a remote server.
This second-stage PowerShell script, also obscured, is designed to fetch a Windows batch script from another domain ("cdn.audiowave[.]org") and to configure a Windows Defender Antivirus exclusion list to avoid detection. The batch script then paves the way for the execution of a .NET DLL that reaches out to a PNG image hosted on ImgBB ("i.ibb[.]co").
    "is grabbing the last two pixels from this image and then looping through some data contained elsewhere in it," Veracode said. "It ultimately builds up in memory YET ANOTHER .NET DLL."

Furthermore, the DLL is equipped to create task scheduler entries and features the ability to bypass user account control (UAC) using a combination of FodHelper.exe and programmatic identifiers (ProgIDs) to evade defenses and avoid triggering any security alerts to the user.
    The newly-downloaded DLL is Pulsar RAT, a "free, open-source Remote Administration Tool for Windows" and a variant of the Quasar RAT.
    "From a wall of Japanese characters to a RAT hidden within the pixels of a PNG file, the attacker went to extraordinary lengths to conceal their payload, nesting it a dozen layers deep to evade detection," Veracode said. "While the attacker's ultimate objective for deploying the Pulsar RAT remains unclear, the sheer complexity of this delivery mechanism is a powerful indicator of malicious intent."
    Crypto Malware in the Open-Source Supply Chain
    The findings also coincide with a report from Socket that identified credential stealers, cryptocurrency drainers, cryptojackers, and clippers as the main types of threats targeting the cryptocurrency and blockchain development ecosystem.

    Some of the examples of these packages include -

    express-dompurify and pumptoolforvolumeandcomment, which are capable of harvesting browser credentials and cryptocurrency wallet keys
    bs58js, which drains a victim's wallet and uses multi-hop transfers to obscure theft and frustrate forensic tracing.
lsjglsjdv, asyncaiosignal, and raydium-sdk-liquidity-init, which function as clippers that monitor the system clipboard for cryptocurrency wallet strings and replace them with threat actor‑controlled addresses, rerouting transactions to the attackers

    "As Web3 development converges with mainstream software engineering, the attack surface for blockchain-focused projects is expanding in both scale and complexity," Socket security researcher Kirill Boychenko said.
    "Financially motivated threat actors and state-sponsored groups are rapidly evolving their tactics to exploit systemic weaknesses in the software supply chain. These campaigns are iterative, persistent, and increasingly tailored to high-value targets."
    AI and Slopsquatting
The rise of artificial intelligence (AI)-assisted coding, also called vibe coding, has unleashed another novel threat in the form of slopsquatting, where large language models (LLMs) can hallucinate non-existent but plausible package names that bad actors can weaponize to conduct supply chain attacks.
    Trend Micro, in a report last week, said it observed an unnamed advanced agent "confidently" cooking up a phantom Python package named starlette-reverse-proxy, only for the build process to crash with the error "module not found." However, should an adversary upload a package with the same name on the repository, it can have serious security consequences.

Furthermore, the cybersecurity company noted that advanced coding agents and workflows such as Claude Code CLI, OpenAI Codex CLI, and Cursor AI with Model Context Protocol (MCP)-backed validation can help reduce, but not completely eliminate, the risk of slopsquatting.
    "When agents hallucinate dependencies or install unverified packages, they create an opportunity for slopsquatting attacks, in which malicious actors pre-register those same hallucinated names on public registries," security researcher Sean Park said.
    "While reasoning-enhanced agents can reduce the rate of phantom suggestions by approximately half, they do not eliminate them entirely. Even the vibe-coding workflow augmented with live MCP validations achieves the lowest rates of slip-through, but still misses edge cases."

  • THIS Unexpected Rug Trend Is Taking Over—Here's How to Style It

Pictured above: A dining room in Dallas, Texas, designed by Studio Thomas James.

As you design a room at home, you may have specific ideas about the paint color, furniture placement, and even the lighting scheme your space requires to truly sing. But if you're not also considering what type of rug will ground the entire look, this essential room-finishing touch may end up feeling like an afterthought. After all, one of the best ways to ensure your space looks expertly planned from top to bottom is to opt for a rug that can anchor the whole space, and in many cases, that means a maximalist rug.

A maximalist-style rug, or one that has a bold color, an abstract or asymmetrical pattern, an organic shape, distinctive pile texture, or unconventional application, offers a fresh answer to the perpetual design question, "What is this room missing?" Instead of defaulting to a neutral-colored, low-pile rug that goes largely unnoticed, a compelling case can be made for choosing a design that functions more as a tactile piece of art. Asha Chaudhary, the CEO of Jaipur, India-based rug brand Jaipur Living, has noticed many consumers moving away from "safe" interiors and embracing designs that pop with personality. "There’s a growing desire to design with individuality and soul. A vibrant or highly detailed rug can instantly transform a space by adding movement, contrast, and character, all in one single piece," she says.

Ahead, we spoke to Chaudhary to get her essential tips for choosing the right maximalist rug for your design style, how to evaluate the construction of a piece, and even why you should think outside the box when it comes to the standard area rug shape. Turns out, this foundational mainstay can be a deeply personal expression of identity.

When a Maximalist Rug Makes Sense

Pictured: An outdoor lounge in Healdsburg, California, designed by Sheldon Harte. (Photo: John Merkl)

As you might imagine, integrating a maximalist rug into an existing aesthetic isn't about making a one-to-one swap. You'll want to refine your overall approach and potentially tweak elements of the room already in place, too.

"I like to think about rugs this way: Sometimes they play a supporting role, and other times, they’re the hero of the room," Chaudhary says. "Statement rugs are designed to stand out. They tell stories, stir emotion, and ground a space the way a bold piece of art would."

In Chaudhary's work with interior designers who are selecting rugs for clients' high-end homes, she's noticed that tastes have recently swung toward a more maximalist ethos.

"Designers are leaning into expression and individuality," she says. "There’s growing interest in bold patterns, asymmetry, and designs that reflect the hand of the maker. Color-wise, we’re seeing more adventurous palettes: think jades, bordeauxes, and terracottas. And there’s a strong desire for rugs that feel personal, like they carry a story or a memory."

Pictured: Jaipur Living’s Manchaha rugs are one-of-a-kind, hand-knotted pieces woven from upcycled hand-spun yarn that follow a freeform design of the artisan’s choosing. (Photos: Jaipur Living)

Jaipur Living is uniquely positioned to fulfill the need for one-of-a-kind rugs that are not just visually striking within a space, but deeply meaningful as well. The brand's Manchaha collection comprises rugs made of upcycled yarn, each hand-knotted by rural Indian artisans in freeform shapes that capture the imagination.

"Each piece is designed from the heart of the artisan, with no predetermined pattern, just emotion, inspiration, and memory woven together by hand. What excites me most is this shift away from perfection and toward beauty that feels lived-in, layered, and real," she adds.

How to Choose the Right Maximalist Rug

Pictured: Design firm Drake/Anderson reimagined this Greenwich, Connecticut, living room. (Photo: Brittany Ambridge)

Good news for those who are taking a slow-decorating approach with their home: Finding the right maximalist rug for your space means looking at the big picture first.

"Most shoppers start with size and color, but the first question should really be, 'How will this space be used?' That answer guides everything—material, construction, and investment," says Chaudhary.

Are you styling an off-limits living room or a lively family den where guests may occasionally wander in with shoes on? In considering your materials, you may want to opt for a performance-fabric rug for areas subject to frequent wear and tear, but Chaudhary has a clear favorite for nearly all other spaces. "Wool is the gold standard. It’s naturally resilient, stain-resistant, and has excellent bounce-back, meaning it recovers well from foot traffic and furniture impressions," she says. "It’s also moisture-wicking and insulating, making it an ideal choice for both comfort and durability."

As far as construction goes, Chaudhary breaks down the most widely available options on the market:

- A hand-knotted rug, crafted by tying individual knots, is the most durable construction and can last decades, even with daily use.
- Hand-tufted rugs offer a beautiful look at a more accessible price point, but typically won't have the same lifespan.
- Power-loomed rugs can be a great solution for high-traffic areas when made with quality materials.

Though they fall at the higher end of the price spectrum, hand-knotted rugs aren't meant to be untouchable; after all, their quality construction helps ensure that they can stand up to minor mishaps in day-to-day living. This can shift your appreciation of a rug from a humble underfoot accent to a long-lasting art piece worthy of care and intentional restoration when the time comes. "Understanding these distinctions helps consumers make smarter, more lasting investments for their homes," Chaudhary says.

Opting for Unconventional Applications

Pictured: Sarah Vaile designed this vibrant vestibule in Chicago, Illinois. (Photo: Lesley Unruh)

Maximalist rugs encompass an impressively broad category, and even if you already have an area rug rolled out that you're happy with, there are alternative shapes you can choose, or ways in which they can imbue creative expression far beyond the floor.

"I’ve seen some incredibly beautiful applications of rugs as wall art. Especially when it comes to smaller or one-of-a-kind pieces, hanging them allows people to appreciate the detail, texture, and artistry at eye level," says Chaudhary. "Some designers have also used narrow runners as table coverings or layered over larger textiles for added dimension."

Another interesting facet of maximalist rugs is that you can think outside the rectangle in terms of silhouette.

"We’re seeing more interest in irregular rug shapes, think soft ovals, curves, even asymmetrical outlines," says Chaudhary. "Clients are designing with more fluidity and movement in mind, especially in open-plan spaces. Extra-long runners, oversized circles, and multi-shape layouts are also trending."

Ultimately, the best maximalist rug for you is one that meets your home's needs while highlighting your personal style. In spaces where dramatic light fixtures or punchy paint colors aren't practical or allowed, a statement-making rug is the ideal solution. While trends will continue to evolve, homing in on a unique, even tailor-made, design will help ensure aesthetic longevity.
    #this #unexpected #rug #trend #taking
    THIS Unexpected Rug Trend Is Taking Over—Here's How to Style It
    WWW.HOUSEBEAUTIFUL.COM
    Pictured above: A dining room in Dallas, Texas, designed by Studio Thomas James.

    As you design (or redesign) a room at home, you may have specific ideas about the paint color, furniture placement, and even the lighting scheme your space requires to truly sing. But if you're not also considering what type of rug will ground the entire look, this essential room-finishing touch may end up feeling like an afterthought. After all, one of the best ways to ensure your space looks expertly planned from top to bottom is to opt for a rug that can anchor the whole space—and, in many cases, that means a maximalist rug.

    A maximalist-style rug—one with a bold color, an abstract or asymmetrical pattern, an organic shape, a distinctive pile texture, or an unconventional application (such as functioning as a wall mural)—offers a fresh answer to the perpetual design question, "What is this room missing?" Instead of defaulting to a neutral-colored, low-pile rug that goes largely unnoticed, a compelling case can be made for choosing a design that functions more as a tactile piece of art. Asha Chaudhary, the CEO of Jaipur, India-based rug brand Jaipur Living, has noticed many consumers moving away from "safe" interiors and embracing designs that pop with personality. "There’s a growing desire to design with individuality and soul. A vibrant or highly detailed rug can instantly transform a space by adding movement, contrast, and character, all in one single piece," she says.

    Ahead, we spoke to Chaudhary to get her essential tips for choosing the right maximalist rug for your design style, how to evaluate the construction of a piece, and even why you should think outside the box when it comes to the standard area rug shape. Turns out, this foundational mainstay can be a deeply personal expression of identity.

    When a Maximalist Rug Makes Sense
    Pictured: An outdoor lounge in Healdsburg, California, designed by Sheldon Harte. (Photo: John Merkl)

    As you might imagine, integrating a maximalist rug into an existing aesthetic isn't about making a one-to-one swap. You'll want to refine your overall approach and potentially tweak elements of the room already in place, too.

    "I like to think about rugs this way: Sometimes they play a supporting role, and other times, they’re the hero of the room," Chaudhary says. "Statement rugs are designed to stand out. They tell stories, stir emotion, and ground a space the way a bold piece of art would."

    In Chaudhary's work with interior designers who are selecting rugs for clients' high-end homes, she's noticed that tastes have recently swung toward a more maximalist ethos.

    "Designers are leaning into expression and individuality," she says. "There’s growing interest in bold patterns, asymmetry, and designs that reflect the hand of the maker. Color-wise, we’re seeing more adventurous palettes: think jades, bordeauxes, and terracottas. And there’s a strong desire for rugs that feel personal, like they carry a story or a memory."

    Pictured: Jaipur Living’s Manchaha rugs are one-of-a-kind, hand-knotted pieces woven from upcycled hand-spun yarn that follow a freeform design of the artisan’s choosing. (Photo: Jaipur Living)

    Jaipur Living is uniquely positioned to fulfill the need for one-of-a-kind rugs that are not just visually striking within a space, but deeply meaningful as well. The brand's Manchaha collection (meaning “expression of my heart” in Hindi) comprises rugs made of upcycled yarn, each hand-knotted by rural Indian artisans in freeform shapes that capture the imagination.

    "Each piece is designed from the heart of the artisan, with no predetermined pattern, just emotion, inspiration, and memory woven together by hand. What excites me most is this shift away from perfection and toward beauty that feels lived-in, layered, and real," she adds.

    How to Choose the Right Maximalist Rug
    Pictured: Design firm Drake/Anderson reimagined this Greenwich, Connecticut, living room. (Photo: Brittany Ambridge)

    Good news for those who are taking a slow-decorating approach with their home: Finding the right maximalist rug for your space means looking at the big picture first.

    "Most shoppers start with size and color, but the first question should really be, 'How will this space be used?' That answer guides everything—material, construction, and investment," says Chaudhary.

    Are you styling an off-limits living room or a lively family den where guests may occasionally wander in with shoes on? In considering your materials, you may want to opt for a performance-fabric rug for areas subject to frequent wear and tear, but Chaudhary has a clear favorite for nearly all other spaces. "Wool is the gold standard. It’s naturally resilient, stain-resistant, and has excellent bounce-back, meaning it recovers well from foot traffic and furniture impressions," she says. "It’s also moisture-wicking and insulating, making it an ideal choice for both comfort and durability."

    As far as construction goes, Chaudhary breaks down the most widely available options on the market:

    Hand-knotted: Crafted by tying individual knots, this is the most durable construction; a hand-knotted rug can last decades, even with daily use.
    Hand-tufted: These rugs offer a beautiful look at a more accessible price point, but typically won’t have the same lifespan.
    Power-loomed: These can be a great solution for high-traffic areas when made with quality materials.

    Though they fall at the higher end of the price spectrum, hand-knotted rugs aren't meant to be untouchable—after all, their quality construction helps ensure that they can stand up to minor mishaps in day-to-day living. This can shift your appreciation of a rug from a humble underfoot accent to a long-lasting art piece worthy of care and intentional restoration when the time comes. "Understanding these distinctions helps consumers make smarter, more lasting investments for their homes," Chaudhary says.

    Opting for Unconventional Applications
    Pictured: Sarah Vaile designed this vibrant vestibule in Chicago, Illinois. (Photo: Lesley Unruh)

    Maximalist rugs encompass an impressively broad category, and even if you already have an area rug rolled out that you're happy with, there are alternative shapes you can choose, or ways in which they can express creativity far beyond the floor.

    "I’ve seen some incredibly beautiful applications of rugs as wall art. Especially when it comes to smaller or one-of-a-kind pieces, hanging them allows people to appreciate the detail, texture, and artistry at eye level," says Chaudhary. "Some designers have also used narrow runners as table coverings or layered over larger textiles for added dimension."

    Another interesting facet of maximalist rugs is that you can think outside the rectangle in terms of silhouette.

    "We’re seeing more interest in irregular rug shapes: think soft ovals, curves, even asymmetrical outlines," says Chaudhary. "Clients are designing with more fluidity and movement in mind, especially in open-plan spaces. Extra-long runners, oversized circles, and multi-shape layouts are also trending."

    Ultimately, the best maximalist rug for you is one that meets your home's needs while highlighting your personal style. In spaces where dramatic light fixtures or punchy paint colors aren't practical or allowed (in the case of renters, for example), a statement-making rug is the ideal solution. While trends will continue to evolve, homing in on a unique—even tailor-made—design will help ensure aesthetic longevity.

    Follow House Beautiful on Instagram and TikTok.