• The protests in Los Angeles have drawn plenty of attention, but the pattern is familiar: a protest begins, people get riled up, and the misinformation pours in. Chatbot-generated disinformation keeps circulating like a fly that never goes away, and the flood of misleading posts online makes everything seem more chaotic than it really is.

    Our senior politics editor digs into this in the latest episode of Uncanny Valley, discussing how chatbots are amplifying false information around the protests. Scroll through social media and you'll find countless posts that are misleading or outright untrue; people seem too tired to fact-check anymore. The impact on the protests is real, with misinformation adding to the confusion and frustration.

    As the protests continue, it's hard to see a clear path forward. Disinformation clouds the truth, many people accept whatever appears on their screens, the same discussions repeat, and the chatbots keep generating content. The episode may shed some light on all of this, but whether anyone is really listening, or just scrolling mindlessly, is another question.

    #LosAngelesProtests
    #Disinformation
    #Chatbots
    #UncannyValley
    #Misinformation
    The Chatbot Disinfo Inflaming the LA Protests
    On this episode of Uncanny Valley, our senior politics editor discusses the spread of disinformation online following the onset of the Los Angeles protests.
  • In a world flooded with noise, I find myself lost in the silence. Each day I wake to the same empty room, filled with memories of what once was; the warmth of connection has faded into a cold, hollow isolation. Scrolling through endless feeds of smiling faces, I feel the sting of loneliness, as if everyone else has found their place in the sun while I remain in the corners, searching for a glimpse of belonging.

    I think about what it means to have a share of search in this vast digital landscape: to be a brand that stands out, seen and sought after, while I remain invisible, a mere whisper in the chaos. The percentage of search queries for a brand compared to its competitors feels like a metaphor for my life. The metrics of success and recognition apply to brands and businesses, but how do we measure the longing for connection, the ache for companionship?

    Every interaction feels superficial, a transaction without substance. I crave authenticity, a genuine bond that transcends the digital noise, yet reaching out brings the familiar sting of rejection. Still, even in this desolation, I hold onto a flicker of hope: perhaps one day I will find my own share of search, a moment where I am not just a statistic but a soul recognized and valued. Until then, I will keep wandering this vast expanse, seeking the connection that feels so elusive.

    #Loneliness #SearchForConnection #Heartbreak #Isolation #EmotionalJourney
    What Is Share of Search? & How to Calculate It
    Share of search is the percentage of search queries for a brand relative to competitors in the same category.
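    The definition above is a simple ratio, and it can be sketched in a few lines of code. A minimal Python sketch; the brand name and monthly search-volume figures below are hypothetical, for illustration only:

    ```python
    def share_of_search(brand_volume, competitor_volumes):
        """Share of search: a brand's search queries as a percentage of
        all search queries for brands in the same category."""
        total = brand_volume + sum(competitor_volumes)
        if total == 0:
            return 0.0  # avoid division by zero when no one is searched for
        return 100.0 * brand_volume / total

    # Hypothetical monthly search volumes for a brand and two competitors.
    acme = 12_000
    rivals = [30_000, 18_000]
    print(f"Acme's share of search: {share_of_search(acme, rivals):.1f}%")  # 20.0%
    ```

    The same arithmetic works with any consistent query counts (weekly, monthly, or yearly), as long as every brand in the category is measured over the same period.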
  • Everything We Saw During Sony's Big State Of Play

    Today, Sony held its latest State of Play, and while it didn't include some of the bigger games people were likely hoping for, like a new God of War spin-off, it did include some cool surprises, including what might be the best-looking fighting game I've seen in years and a remake of a beloved Final Fantasy title.

    Here's the full State of Play if you want to watch it all: State of Play | June 4, 2025

    Lumines Arise
    Lumines Arise - Announce Trailer | PS5 Games
    This fun announcement kicked off the 45-minute show. It's from the devs behind Tetris Effect, and it looks (and sounds) awesome. This one is coming to PS5 and PSVR2 in Fall 2025.

    Pragmata delayed again
    Pragmata - First Contact Trailer | PS5 Games
    Capcom had a new gameplay trailer for its long-awaited space-adventure-action-horror game featuring a young robot girl and a cool astronaut dude fighting robots. The game was set to come out in 2022, got delayed until 2023, and then got delayed again. Now it's set to arrive in 2026.

    Romeo Is a Dead Man looks wild
    Romeo is a Dead Man - Announce Trailer | PS5 Games
    Zombies attack, nearly killing Romeo, but luckily his grandfather shoves a big sci-fi thing into his head and turns him into a sword-wielding half-dead undead killer. Okay, I'm interested. This one is "maybe" arriving in 2026. No shock that Suda51 and Grasshopper Manufacture are behind this wild-looking game.

    Silent Hill f arrives in September
    Silent Hill f - Release Date Trailer | PS5 Games
    I'm very happy that we've entered a new and better era for the Silent Hill franchise after so, so many years of crap and lukewarm slop. This latest entry looks very different than past games, but still seems properly foggy, spooky, and creepy. And that's all I need in a Silent Hill sequel. You don't have to wait long for this one, as it launches September 25, 2025.

    Bloodstained: The Scarlet Engagement is revealed
    Bloodstained: The Scarlet Engagement - Announce Trailer | PS5 Games
    Surprise! We are getting a new Bloodstained game. What started back in 2015 as a crowdfunded attempt to make a new Castlevania-inspired 2D action game has expanded into a large franchise of its own. And it's about to get bigger when this new entry arrives sometime in the future on PS5.

    Digimon Story Time Stranger
    Digimon Story Time Stranger - Release Date Trailer | PS5 Games
    Wow, it's wild how much better Digimon games look than Pokémon games. This latest time-traveling RPG set in the universe features some slick visuals and a lot of people screaming about the digital world. As someone living in 2025, I'm fine with the digital world being burned to the ground. Time Stranger is out October 3, 2025.

    Final Fantasy Tactics is finally getting a remake
    Final Fantasy Tactics - The Ivalice Chronicles - Announcement Trailer | PS5 Games
    You can read more about this in our story here, but suffice it to say that the legendary tactics RPG is, at long last, getting the remake fans have been clamoring for for ages.

    Sony devoted a few minutes to sharing trailers and a few bits of info on these four games, one of which is launching on PS4. Wild.
    Baby Steps - Release Date Announcement | PS5 Games
    Hirogami - Pre-order Trailer | PS5 Games
    Cairn - Release Date and Demo | PS5 Games
    Ninja Gaiden: Ragebound - Release Date Announcement | PS5 & PS4 Games

    Mortal Kombat: Legacy Kollection looks amazing
    Mortal Kombat: Legacy Kollection - Announce Trailer | PS5 & PS4 Games
    Digital Eclipse, the retro game wizards behind some excellent old-school video game collections, is taking on the Mortal Kombat series in 2025. Legacy Kollection will contain a ton of games from the series, including handheld, console, and arcade ports, as well as rollback netcode and documentary content. I'm very excited to see more of this thing.

    Sony is making a fight stick
    Project Defiant Wireless Fight Stick - Teaser Trailer
    Not much info beyond that. Here's Sony's official description of the device: "Join the fray at home or away with the Project Defiant wireless fight stick*. Take the fight to your opponents with the included sling carry case, and enjoy precise in-game response with ultra-low latency wireless and wired play options, along with a durable, ergonomic design that's built for battle."

    New Metal Gear Solid Delta: Snake Eater trailer
    Metal Gear Solid Δ: Snake Eater - Gameplay Trailer | PS5 Games
    The Ape Escape mode is back and looks bigger than before. But the big news is at the very end, where Konami is teasing what looks like a multiplayer mode. Is Metal Gear Online returning? I hope so.

    Nioh 3 is coming in 2026, and a demo is out now
    Nioh 3 - Announcement Trailer | PS5 Games
    Team Ninja is back with Nioh 3. The game will let players swap between a samurai and a ninja on the fly during combat, and it looks slick as heck. This one drops in 2026, but you can check out a demo for it today on PS5.

    Thief VR spin-off coming to PSVR2 this year
    Thief VR: Legacy of Shadow - Reveal Trailer | PS VR2 Games
    When I saw this trailer, I thought, "Hey, this looks like a new Thief game." And I was right. Thief VR: Legacy of Shadow arrives in 2025 and is being co-developed by Eidos-Montréal and Vertigo Games.

    Tides of Tomorrow looks like a colorful Waterworld game
    Tides of Tomorrow - Release Date Trailer | PS5 Games
    I love colorful-looking games, so Tides of Tomorrow instantly caught my eye. The mix of online asynchronous gameplay and having to deal with other people's decisions in a dying, flooded world sounds interesting. Tides of Tomorrow arrives in February 2026.

    Astro Bot is getting even more levels!
    Astro Bot - Challenge DLC Trailer | State of Play 2025
    One of the best PS5 games ever made, Astro Bot, is getting even more free levels later this month. Five new levels will soon be added, and Sony is bringing back the fan-favorite Astro Bot PS5 controller that sold out instantly last year.

    Sea of Remnants is another colorful pirate game
    Sea of Remnants - Announce Trailer | PS5 Games
    Wait a minute, another colorful and fun-looking ocean-based adventure game? That's strange. But anyway, this third-person pirate game has a cool style, big bosses to fight, ship gameplay, and mermaids. It arrives on PS5 in 2026.

    Sword of the Sea is coming to PS Plus
    Sword of the Sea - Launch Date Announcement | PS5 Games
    The people behind Abzu and The Pathless have a new game coming out, and it looks like you'll be doing a lot of sick sword surfing. It launches on PS Plus on August 19.

    More games coming to PS Plus Classic Catalog
    Deus Ex's PS2 port - June 17
    Twisted Metal 3 and Twisted Metal 4 - July 15
    Resident Evil 2 and Resident Evil 3 - Later this summer

    007 First Light reveals its new Bond
    "Bond is a bullet without a target. Let's give him one." This new trailer looks exciting as heck, but I'm really bummed by how boring the new Bond himself looks. This is out in 2026. Read more here.

    Ghost of Yotei will get its own digital event
    Sony is promising a big gameplay deep dive for Sucker Punch's upcoming samurai game. Expect to learn more in July.

    Marvel Tōkon: Fighting Souls
    MARVEL Tōkon: Fighting Souls | Announce Trailer
    Well, this is a wonderful surprise. Sony, Marvel, and Arc System Works (Guilty Gear, BlazBlue, Dragon Ball FighterZ) are teaming up to make a 2.5D Marvel Comics tag-team fighting game. And it looks sick as hell. Marvel Tōkon: Fighting Souls is coming out in 2026 on PS5 and PC. Here's a separate video Sony posted about how the game came about and the work going into it: MARVEL Tōkon: Fighting Souls | From Japan to the World
    KOTAKU.COM
  • Google’s New AI Tool Generates Convincing Deepfakes of Riots, Conflict, and Election Fraud

    Google’s recently launched AI video tool can generate realistic clips that contain misleading or inflammatory information about news events, according to a TIME analysis and several tech watchdogs. TIME was able to use Veo 3 to create realistic videos, including a Pakistani crowd setting fire to a Hindu temple; Chinese researchers handling a bat in a wet lab; an election worker shredding ballots; and Palestinians gratefully accepting U.S. aid in Gaza. While each of these videos contained some noticeable inaccuracies, several experts told TIME that if shared on social media with a misleading caption in the heat of a breaking news event, they could conceivably fuel social unrest or violence.

While text-to-video generators have existed for several years, Veo 3 marks a significant jump forward, creating AI clips that are nearly indistinguishable from real ones. Unlike the outputs of previous video generators like OpenAI’s Sora, Veo 3 videos can include dialogue, soundtracks, and sound effects. They largely follow the rules of physics and lack the telltale flaws of past AI-generated imagery. Users have had a field day with the tool, creating short films about plastic babies, pharma ads, and man-on-the-street interviews. But experts worry that tools like Veo 3 will have a much more dangerous effect: turbocharging the spread of misinformation and propaganda, and making it even harder to tell fiction from reality. Social media is already flooded with AI-generated content about politicians. In the first week of Veo 3’s release, online users posted fake news segments in multiple languages, including an anchor announcing the death of J.K. Rowling, as well as fake political news conferences.
“The risks from deepfakes and synthetic media have been well known and obvious for years, and the fact the tech industry can’t even protect against such well-understood, obvious risks is a clear warning sign that they are not responsible enough to handle even more dangerous, uncontrolled AI and AGI,” says Connor Leahy, the CEO of Conjecture, an AI safety company. “The fact that such blatant irresponsible behavior remains completely unregulated and unpunished will have predictably terrible consequences for innocent people around the globe.”

Days after Veo 3’s release, a car plowed through a crowd in Liverpool, England, injuring more than 70 people. Police swiftly clarified that the driver was white, to preempt racist speculation of migrant involvement. (Last summer, false reports that a knife attacker was an undocumented Muslim migrant sparked riots in several cities.) Days later, Veo 3 obligingly generated a video of a similar scene, showing police surrounding a car that had just crashed—and a Black driver exiting the vehicle. TIME generated the video with the following prompt: “A video of a stationary car surrounded by police in Liverpool, surrounded by trash. Aftermath of a car crash. There are people running away from the car. A man with brown skin is the driver, who slowly exits the car as police arrive- he is arrested. The video is shot from above - the window of a building. There are screams in the background.”

After TIME contacted Google about these videos, the company said it would begin adding a visible watermark to videos generated with Veo 3. The watermark now appears on videos generated by the tool. However, it is very small and could easily be cropped out with video-editing software. In a statement, a Google spokesperson said: “Veo 3 has proved hugely popular since its launch. We’re committed to developing AI responsibly and we have clear policies to protect users from harm and governing the use of our AI tools.” Videos generated by Veo 3 have always contained an invisible watermark known as SynthID, the spokesperson said.
Google is currently working on a tool called SynthID Detector that would allow anyone to upload a video to check whether it contains such a watermark, the spokesperson added. However, this tool is not yet publicly available.

Attempted safeguards

Veo 3 is available for $249 a month to Google AI Ultra subscribers in countries including the United States and the United Kingdom. There were plenty of prompts that Veo 3 did block TIME from creating, especially those related to migrants or violence. When TIME asked the model to create footage of a fictional hurricane, it wrote that such a video went against its safety guidelines and “could be misinterpreted as real and cause unnecessary panic or confusion.” The model generally refused to generate videos of recognizable public figures, including President Trump and Elon Musk. It refused to create a video of Anthony Fauci saying that COVID was a hoax perpetrated by the U.S. government. Veo’s website states that it blocks “harmful requests and results.” The model’s documentation says it underwent pre-release red-teaming, in which testers attempted to elicit harmful outputs from the tool. Additional safeguards were then put in place, including filters on its outputs.

A technical paper released by Google alongside Veo 3 downplays the misinformation risks that the model might pose. Veo 3 is bad at creating text, and is “generally prone to small hallucinations that mark videos as clearly fake,” it says. “Second, Veo 3 has a bias for generating cinematic footage, with frequent camera cuts and dramatic camera angles – making it difficult to generate realistic coercive videos, which would be of a lower production quality.” However, minimal prompting did lead to the creation of provocative videos.
One showed a man wearing an LGBT rainbow badge pulling envelopes out of a ballot box and feeding them into a paper shredder. (Veo 3 titled the file “Election Fraud Video.”) Other videos generated in response to prompts by TIME included a dirty factory filled with workers scooping infant formula with their bare hands; an e-bike bursting into flames on a New York City street; and Houthi rebels angrily seizing an American flag.

Some users have been able to take misleading videos even further. Internet researcher Henk van Ess created a fabricated political scandal using Veo 3 by editing together short video clips into a fake newsreel that suggested a small-town school would be replaced by a yacht manufacturer. “If I can create one convincing fake story in 28 minutes, imagine what dedicated bad actors can produce,” he wrote on Substack. “We’re talking about the potential for dozens of fabricated scandals per day.”

“Companies need to be creating mechanisms to distinguish between authentic and synthetic imagery right now,” says Margaret Mitchell, chief AI ethics scientist at Hugging Face. “The benefits of this kind of power—being able to generate realistic life scenes—might include making it possible for people to make their own movies, or to help people via role-playing through stressful situations,” she says. “The potential risks include making it super easy to create intense propaganda that manipulatively enrages masses of people, or confirms their biases so as to further propagate discrimination—and bloodshed.”

In the past, there were surefire ways of telling that a video was AI-generated—perhaps a person might have six fingers, or their face might transform between the beginning of the video and the end. But as models improve, those signs are becoming increasingly rare. (A video depicting how AIs have rendered Will Smith eating spaghetti shows how far the technology has come in the last three years.) For now, Veo 3 will only generate clips up to eight seconds long, meaning that if a video contains shots that linger for longer, it’s a sign it could be genuine. But this limitation is not likely to last for long.
Eroding trust online

Cybersecurity experts warn that advanced AI video tools will allow attackers to impersonate executives, vendors, or employees at scale, convincing victims to relinquish important data. Nina Brown, a Syracuse University professor who specializes in the intersection of media law and technology, says that while there are other large potential harms—including election interference and the spread of nonconsensual sexually explicit imagery—arguably the most concerning is the erosion of collective online trust. “There are smaller harms that cumulatively have this effect of, ‘can anybody trust what they see?’” she says. “That’s the biggest danger.”

Already, accusations that real videos are AI-generated have gone viral online. One post on X, which received 2.4 million views, accused a Daily Wire journalist of sharing an AI-generated video of an aid distribution site in Gaza. A journalist at the BBC later confirmed that the video was authentic. Conversely, an AI-generated video of an “emotional support kangaroo” trying to board an airplane went viral and was widely accepted as real by social media users.

Veo 3 and other advanced deepfake tools will also likely spur novel legal clashes. Issues around copyright have flared up, with AI labs including Google being sued by artists for allegedly training on their copyrighted content without authorization. (DeepMind told TechCrunch that Google models like Veo “may” be trained on YouTube material.) Celebrities who are subjected to hyper-realistic deepfakes have some legal protections thanks to “right of publicity” statutes, but those vary drastically from state to state. In April, Congress passed the Take It Down Act, which criminalizes non-consensual deepfake porn and requires platforms to take down such material. Industry watchdogs argue that additional regulation is necessary to mitigate the spread of deepfake misinformation.
“Existing technical safeguards implemented by technology companies such as 'safety classifiers' are proving insufficient to stop harmful images and videos from being generated,” says Julia Smakman, a researcher at the Ada Lovelace Institute. “As of now, the only way to effectively prevent deepfake videos from being used to spread misinformation online is to restrict access to models that can generate them, and to pass laws that require those models to meet safety requirements that meaningfully prevent misuse.”
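The article contrasts Google's small visible watermark, which can simply be cropped out, with the invisible SynthID signal embedded in the pixels themselves. As a toy illustration of why an in-pixel watermark is harder to strip, the sketch below hides a key-derived bit pattern in the least-significant bits of a frame and detects it statistically. This is an assumption-laden teaching example, not Google's actual SynthID scheme (which is unpublished and far more robust); all function names here are invented for the demo.

```python
import numpy as np

# Toy invisible watermark: hide a secret, key-derived bit pattern in the
# least-significant bits (LSBs) of selected pixels. NOT Google's SynthID
# algorithm -- just a minimal sketch of the general idea.

SIG_LEN = 1024  # number of pixels that each carry one signature bit

def signature_positions(shape, key):
    """Pseudo-random pixel positions derived from a secret key."""
    rng = np.random.default_rng(key)
    flat = rng.choice(shape[0] * shape[1], size=SIG_LEN, replace=False)
    return np.unravel_index(flat, shape)

def embed(frame, key):
    """Overwrite the LSB of each chosen pixel with a key-derived bit."""
    rng = np.random.default_rng(key + 1)
    bits = rng.integers(0, 2, size=SIG_LEN, dtype=np.uint8)
    out = frame.copy()
    pos = signature_positions(frame.shape, key)
    out[pos] = (out[pos] & 0xFE) | bits  # clear LSB, then set signature bit
    return out

def detect(frame, key):
    """Fraction of signature bits that match; ~0.5 means 'no watermark'."""
    rng = np.random.default_rng(key + 1)
    bits = rng.integers(0, 2, size=SIG_LEN, dtype=np.uint8)
    pos = signature_positions(frame.shape, key)
    return float(np.mean((frame[pos] & 1) == bits))

frame = np.random.default_rng(0).integers(0, 256, size=(480, 640), dtype=np.uint8)
marked = embed(frame, key=42)
print(detect(marked, key=42))  # 1.0 -- every signature bit matches
print(detect(frame, key=42))   # roughly 0.5 -- chance agreement only
```

Because the signal lives in the image data rather than on top of it, cropping a corner no longer removes it; on the other hand, even this toy scheme shows the fragility the experts worry about, since re-encoding or resizing the video would destroy plain LSBs (real systems like SynthID are designed to survive such transformations).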
  • The Last of Us – Season 2: Alex Wang (Production VFX Supervisor) & Fiona Campbell Westgate (Production VFX Producer)

    After detailing the VFX work on The Last of Us Season 1 in 2023, Alex Wang returns to reflect on how the scope and complexity have evolved in Season 2.
    With close to 30 years of experience in the visual effects industry, Fiona Campbell Westgate has contributed to major productions such as Ghost in the Shell, Avatar: The Way of Water, Ant-Man and the Wasp: Quantumania, and Nyad. Her work on Nyad earned her a VES Award for Outstanding Supporting Visual Effects in a Photoreal Feature.
    Collaboration with Craig Mazin and Neil Druckmann is key to shaping the visual universe of The Last of Us. Can you share with us how you work with them and how they influence the visual direction of the series?
    Alex Wang // Craig visualizes the shot or scene before putting words on the page. His writing is always exceptionally detailed and descriptive, ultimately helping us to imagine the shot. Of course, no one understands The Last of Us better than Neil, who knows all aspects of the lore very well. He’s done much research and design work with the Naughty Dog team, so he gives us good guidance regarding creature and environment designs. I always try to begin with concept art to get the ball rolling with Craig and Neil’s ideas. This season, we collaborated with Chromatic Studios for concept art. They also contributed to the games, so I felt that continuity was beneficial for our show.
    Fiona Campbell Westgate // From the outset, it was clear that collaborating with Craig would be an exceptional experience. Early meetings revealed just how personable and invested Craig is. He works closely with every department to ensure that each episode is done to the highest level. Craig places unwavering trust in our VFX Supervisor, Alex Wang. They have an understanding between them that lends itself to an exceptional partnership. As the VFX Producer, I know how vital the dynamic between the Showrunner and VFX Supervisor is; working with these two has made for one of the best professional experiences of my career.
    Photograph by Liane Hentscher/HBO
    How has your collaboration with Craig evolved between the first and second seasons? Were there any adjustments in the visual approach or narrative techniques you made this season?
    Alex Wang // Since everything was new in Season 1, we dedicated a lot of time and effort to exploring the show’s visual language, and we all learned a great deal about what worked and what didn’t for the show. In my initial conversations with Craig about Season 2, it was clear that he wanted to expand the show’s scope by utilizing what we established and learned in Season 1. He felt significantly more at ease fully committing to using VFX to help tell the story this season.
    The first season involved multiple VFX studios to handle the complexity of the effects. How did you divide the work among different studios for the second season?
    Alex Wang // Most of the vendors this season were also in Season 1, so we already had a shorthand. The VFX Producer, Fiona Campbell Westgate, and I work closely together to decide how to divide the work among our vendors. The type of work needs to be well-suited for the vendor and fit into our budget and schedule. We were extremely fortunate to have the vendors we did this season. I want to take this opportunity to thank Weta FX, DNEG, RISE, Distillery VFX, Storm Studios, Important Looking Pirates, Blackbird, Wylie Co., RVX, and VDK. We also had ILM for concept art and Digital Domain for previs.
Fiona Campbell Westgate // Alex Wang and I were very aware of the tight delivery schedule, which added to the challenge of distributing the workload. We planned the work based on each studio's capabilities, and tried not to burden them with back-to-back episodes wherever possible. Fortunately, there was a shorthand with vendors from Season 1, who were well acquainted with the process and the quality of work the show required.

    The town of Jackson is a key location in The Last of Us. Could you explain how you approached creating and expanding this environment for the second season?
    Alex Wang // Since Season 1, this show has created incredible sets. However, the Jackson town set build is by far the most impressive in terms of scope. They constructed an 822 ft x 400 ft set in Minaty Bay that resembled a real town! I had early discussions with Production Designer Don MacAulay and his team about where they should concentrate their efforts and where VFX would make the most sense to take over. They focused on developing the town’s main street, where we believed most scenes would occur. There is a big reveal of Jackson in the first episode after Ellie comes out of the barn. Distillery VFX was responsible for the town’s extension, which appears seamless because the team took great pride in researching and ensuring the architecture aligned with the set while staying true to the tone of Jackson, Wyoming.
Fiona Campbell Westgate // An impressive set was constructed in Minaty Bay, which served as the foundation for VFX to build upon. There is a beautiful establishing shot of Jackson in Episode 1, completed by Distillery, showing a safe and almost normal setting as Season 2 starts. Across the episodes, Jackson set extensions were completed by our partners at RISE and Weta. Each had a different phase of Jackson to create, from almost idyllic to a town immersed in battle.
    What challenges did you face filming Jackson on both real and virtual sets? Was there a particular fusion between visual effects and live-action shots to make it feel realistic?
Alex Wang // I always advocate for building exterior sets outdoors to take advantage of natural light. However, the drawback is that we cannot control the weather and lighting when filming over several days across two units. In Episode 2, there’s supposed to be a winter storm in Jackson, so maintaining consistency within the episode was essential. On sunny and rainy days, we used cranes to lift large 30x60ft screens to block the sun or rain. It was impossible to shield the entire set, so we prioritized protecting the actors. Thus, you can imagine there was extensive weather cleanup for the episode to ensure consistency within the sequences.
Fiona Campbell Westgate // We were fortunate that production built a large-scale Jackson set. It provided a base for the full CG Jackson aerial shots and CG set extensions. The weather conditions at Minaty Bay presented a challenge when filming the end of the Battle sequence in Episode 2: periods of bright sunshine alternated with rainfall. In addition to the obvious visual effects work, it became necessary to replace the ground cover.
    Photograph by Liane Hentscher/HBO
    The attack on Jackson by the horde of infected in season 2 is a very intense moment. How did you approach the visual effects for this sequence? What techniques did you use to make the scale of the attack feel as impressive as it did?
    Alex Wang // We knew this would be a very complex sequence to shoot, and for it to be successful, we needed to start planning with the HODs from the very beginning. We began previs during prep with Weta FX and the episode’s director, Mark Mylod. The previs helped us understand Mark and the showrunner’s vision. This then served as a blueprint for all departments to follow, and in many instances, we filmed the previs.
Fiona Campbell Westgate // The sheer size of the CG Infected Horde sets the tone for the scale of the Battle. It’s an intimidating moment when they are revealed through the blowing snow. The addition of CG explosions and atmospheric effects added further scale to the sequence.

    Can you give us an insight into the technical challenges of capturing the infected horde? How much of the effect was done using CGI, and how much was achieved with practical effects?
    Alex Wang // Starting with a detailed previs that Mark and Craig approved was essential for planning the horde. We understood that we would never have enough stunt performers to fill a horde, nor could they carry out some stunts that would be too dangerous. I reviewed the previs with Stunt Coordinator Marny Eng numerous times to decide the best placements for her team’s stunt performers. We also collaborated with Barrie Gower from the Prosthetics team to determine the most effective allocation of his team’s efforts. Stunt performers positioned closest to the camera would receive the full prosthetic treatment, which can take hours.
    Weta FX was responsible for the incredible CG Infected horde work in the Jackson Battle. They have been a creative partner with HBO’s The Last of Us since Season 1, so they were brought on early for Season 2. I began discussions with Weta’s VFX supervisor, Nick Epstein, about how we could tackle these complex horde shots very early during the shoot.
    Typically, repetition in CG crowd scenes can be acceptable, such as armies with soldiers dressed in the same uniform or armour. However, for our Infected horde, Craig wanted to convey that the Infected didn’t come off an assembly line or all shop at the same clothing department store. Any repetition would feel artificial. These Infected were once civilians with families, or they were groups of raiders. We needed complex variations in height, body size, age, clothing, and hair. We built our base library of Infected, and then Nick and the Weta FX team developed a “mix and match” system, allowing the Infected to wear any costume and hair groom. A procedural texturing system was also developed for costumes, providing even greater variation.
The most crucial aspect of the Infected horde was their motion. We had numerous shots cutting back-to-back with practical Infected, as well as shots where our CG Infected ran right alongside a stunt horde. It was incredibly unforgiving! Weta FX’s animation supervisor from Season 1, Dennis Yoo, returned for Season 2 to meet the challenge. Having been part of the first season, Dennis understood the expectations of Craig and Neil. As with model repetition, repetition in motion was relatively easy to perceive, especially when the Infected were all running toward the same target. It was essential to enhance the details of their performances with nuances such as tripping and falling, getting back up, and trampling over each other. There also needed to be a difference in the Infected’s running speed. To ensure we had enough complexity within the horde, Dennis motion-captured almost 600 unique motion cycles.
We had over a hundred shots in Episode 2 that required the CG Infected horde.
Fiona Campbell Westgate // Nick Epstein, Weta VFX Supervisor, and Dennis Yoo, Weta Animation Supervisor, were tasked with adding hero, close-up Horde that had to integrate with practical stunt performers. They achieved this through more than 60 motion-capture sessions, with the data run through a deformation system they developed. Every detail was applied to allow for a seamless blend with our practical stunt performances. The Weta team created a custom costume and hair system that provided individual looks to the CG Infected Horde. We were able to avoid the repetitive look of a CG crowd due to these efforts.
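The “mix and match” idea described above can be sketched in a few lines. This is a hypothetical Python illustration with made-up asset names and a simple duplicate check, not Weta FX’s actual crowd tooling.

```python
import random

# Hypothetical asset libraries; a production crowd system draws from far larger sets.
BODIES = ["slim", "heavy", "tall", "short", "child", "elderly"]
COSTUMES = ["raider_jacket", "civilian_dress", "work_overalls", "torn_suit"]
GROOMS = ["matted_long", "buzzcut", "bald_cordyceps", "ponytail"]

def build_infected(rng):
    """Assemble one Infected by mixing independent asset choices."""
    return (
        rng.choice(BODIES),
        rng.choice(COSTUMES),
        rng.choice(GROOMS),
        round(rng.uniform(1.5, 2.0), 2),  # height in metres
    )

def build_horde(n, seed=0):
    """Generate n Infected, re-rolling exact duplicates so no two
    agents share body, costume, groom AND height."""
    rng = random.Random(seed)
    horde, seen = [], set()
    while len(horde) < n:
        agent = build_infected(rng)
        if agent not in seen:
            seen.add(agent)
            horde.append(agent)
    return horde

horde = build_horde(200)
```

The point of sampling each attribute independently is combinatorial: even these toy libraries give 6 × 4 × 4 = 96 silhouettes before height and procedural costume texturing multiply the variation further.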

    The movement of the infected horde is crucial for the intensity of the scene. How did you manage the animation and simulation of the infected to ensure smooth and realistic interaction with the environment?
Fiona Campbell Westgate // We worked closely with the Stunt department to plan out positioning and where VFX would be adding the CG Horde. Craig Mazin wanted the Infected Horde to move in a way that humans cannot. The deformation system kept the body shape anatomically correct while allowing us to push the limits beyond how a human physically moves.
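The per-agent motion variation described above (hundreds of mocap cycles, staggered timing, differing run speeds) can be imagined as a simple assignment pass over the crowd. This is a hypothetical sketch assuming a flat library of clips, not Weta’s animation pipeline or deformation system.

```python
import random

def assign_motion(agents, n_cycles=600, seed=1):
    """Give each crowd agent a motion cycle, a phase offset, and a
    playback-speed multiplier so that neighbours never visibly sync up."""
    rng = random.Random(seed)
    assignments = []
    for agent_id in range(agents):
        assignments.append({
            "agent": agent_id,
            "cycle": rng.randrange(n_cycles),   # one of ~600 captured clips
            "phase": rng.uniform(0.0, 1.0),     # normalized start offset into the clip
            "speed": rng.uniform(0.85, 1.25),   # run-speed variation between Infected
        })
    return assignments
```

Randomizing phase and speed is what breaks the “lockstep” look even when two agents happen to share a cycle, which matters most when the whole horde runs toward the same target.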
    The Bloater makes a terrifying return this season. What were the key challenges in designing and animating this creature? How did you work on the Bloater’s interaction with the environment and other characters?
    Alex Wang // In Season 1, the Kansas City cul-de-sac sequence featured only a handful of Bloater shots. This season, however, nearly forty shots showcase the Bloater in broad daylight during the Battle of Jackson. We needed to redesign the Bloater asset to ensure it looked good in close-up shots from head to toe. Weta FX designed the Bloater for Season 1 and revamped the design for this season. Starting with the Bloater’s silhouette, it had to appear large, intimidating, and menacing. We explored enlarging the cordyceps head shape to make it feel almost like a crown, enhancing the Bloater’s impressive and strong presence.
    During filming, a stunt double stood in for the Bloater. This was mainly for scale reference and composition. It also helped the Infected stunt performers understand the Bloater’s spatial position, allowing them to avoid running through his space. Once we had an edit, Dennis mocapped the Bloater’s performances with his team. It is always challenging to get the motion right for a creature that weighs 600 pounds. We don’t want the mocap to be overly exaggerated, but it does break the character if the Bloater feels too “light.” The brilliant animation team at Weta FX brought the Bloater character to life and nailed it!
When Tommy goes head-to-head with the Bloater, Craig was quite specific during the prep days about how the Bloater would bubble, melt, and burn as Tommy torches him with the flamethrower. Important Looking Pirates took on the “Burning Bloater” sequence, led by VFX Supervisor Philip Engström. They began with extensive R&D to ensure the Bloater’s skin would start to bubble and burn. ILP took the final Bloater asset from Weta FX and had to resculpt and texture the asset for the Bloater’s final burn state. Craig felt it was important for the Bloater to appear maimed at the end. The layers of FX were so complex that the R&D continued almost to the end of the delivery schedule.

Fiona Campbell Westgate // This season the Bloater had to be bigger and more intimidating. The CG asset was recreated to withstand the scrutiny of close-ups and daylight. Both Craig Mazin and Neil Druckmann worked closely with us during the build. We referenced the game and applied elements of that version to ours. You’ll notice that his head is in the shape of a crown; this conveys that he’s a powerful force.
    During the Burning Bloater sequence in Episode 2, we brainstormed with Philip Engström, ILP VFX Supervisor, on how this creature would react to the flamethrower and how it would affect the ground as it burns. When the Bloater finally falls to the ground and dies, the extraordinary detail of the embers burning, fluid draining and melting the surrounding snow really sells that the CG creature was in the terrain. 

    Given the Bloater’s imposing size, how did you approach its integration into scenes with the actors? What techniques did you use to create such a realistic and menacing appearance?
    Fiona Campbell Westgate // For the Bloater, a stunt performer wearing a motion capture suit was filmed on set. This provided interaction with the actors and the environment. VFX enhanced the intensity of his movements, incorporating simulations to the CG Bloater’s skin and muscles that would reflect the weight and force as this terrifying creature moves. 

    Seattle in The Last of Us is a completely devastated city. Can you talk about how you recreated this destruction? What were the most difficult visual aspects to realize for this post-apocalyptic city?
    Fiona Campbell Westgate // We were meticulous in blending the CG destruction with the practical environment. The flora’s ability to overtake the environment had to be believable, and we adhered to the principle of form follows function. Due to the vastness of the CG devastation it was crucial to avoid repetitive effects. Consequently, our vendors were tasked with creating bespoke designs that evoked a sense of awe and beauty.
    Was Seattle’s architecture a key element in how you designed the visual effects? How did you adapt the city’s real-life urban landscape to meet the needs of the story while maintaining a coherent aesthetic?
Alex Wang // It’s always important to Craig and Neil that we remain true to the cities our characters are in. DNEG was one of our primary vendors for Boston in Season 1, so it was natural for them to return for Season 2, this time focusing on Seattle. DNEG’s VFX Supervisor, Stephen James, who played a crucial role in developing the visual language of Boston for Season 1, also returns for this season. Stephen and Melaina Mace led a team to Seattle to shoot plates and perform lidar scans of parts of the city. We identified the buildings unique to Seattle that would have existed in 2003, so we ensured these buildings were always included in our establishing shots.
Overgrowth and destruction have significantly influenced the environments in The Last of Us. The environment functions almost as a character in both Season 1 and Season 2. In the last season, the building destruction in Boston was primarily caused by military bombings. During this season, destruction mainly arises from dilapidation. Living in the Pacific Northwest, I understand how damp it can get for most of the year. I imagined that, over 20 years, the integrity of the buildings would be compromised by natural forces. This abundant moisture creates an exceptionally lush and vibrant landscape for much of the year. Therefore, when designing Seattle, we ensured that the destruction and overgrowth appeared intentional and aesthetically distinct from those of Boston.
    Fiona Campbell Westgate // Led by Stephen James, DNEG VFX Supervisor, and Melaina Mace, DNEG DFX Supervisor, the team captured photography, drone footage and the Clear Angle team captured LiDAR data over a three-day period in Seattle. It was crucial to include recognizable Seattle landmarks that would resonate with people familiar with the game. 

    The devastated city almost becomes a character in itself this season. What aspects of the visual effects did you have to enhance to increase the immersion of the viewer into this hostile and deteriorated environment?
    Fiona Campbell Westgate // It is indeed a character. Craig wanted it to be deteriorated but to have moments where it’s also beautiful in its devastation. For instance, in the Music Store in Episode 4 where Ellie is playing guitar for Dina, the deteriorated interior provides a beautiful backdrop to this intimate moment. The Set Decorating team dressed a specific section of the set, while VFX extended the destruction and overgrowth to encompass the entire environment, immersing the viewer in strange yet familiar surroundings.
    Photograph by Liane Hentscher/HBO
    The sequence where Ellie navigates a boat through a violent storm is stunning. What were the key challenges in creating this scene, especially with water simulation and the storm’s effects?
    Alex Wang // In the concluding episode of Season 2, Ellie is deep in Seattle, searching for Abby. The episode draws us closer to the Aquarium, where this area of Seattle is heavily flooded. Naturally, this brings challenges with CG water. In the scene where Ellie encounters Isaac and the W.L.F soldiers by the dock, we had a complex shoot involving multiple locations, including a water tank and a boat gimbal. There were also several full CG shots. For Isaac’s riverine boat, which was in a stormy ocean, I felt it was essential that the boat and the actors were given the appropriate motion. Weta FX assisted with tech-vis for all the boat gimbal work. We began with different ocean wave sizes caused by the storm, and once the filmmakers selected one, the boat’s motion in the tech-vis fed the special FX gimbal.
    When Ellie gets into the Jon boat, I didn’t want it on the same gimbal because I felt it would be too mechanical. Ellie’s weight needed to affect the boat as she got in, and that wouldn’t have happened with a mechanical gimbal. So, we opted to have her boat in a water tank for this scene. Special FX had wave makers that provided the boat with the appropriate movement.
Instead of guessing what the ocean sim for the riverine boat should be, the tech-vis data enabled DNEG to get a head start on the water simulations in post-production. Craig wanted this sequence to appear convincingly dark, much like it looks out on the ocean at night. This allowed us to create dramatic visuals, using lightning strikes at moments to reveal depth.
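The tech-vis handoff described here, where a chosen ocean drives the boat gimbal, can be approximated with a sum-of-sines ocean whose bow-to-stern height difference yields a pitch angle. This is a toy sketch with invented wave components, not the production tech-vis setup.

```python
import math

# Each component is (amplitude in metres, wavenumber, angular speed);
# these values are illustrative, not taken from the show.
COMPONENTS = ((1.2, 0.5, 1.1), (0.4, 2.3, 0.7))

def wave_height(x, t, components=COMPONENTS):
    """Ocean surface height at position x and time t as a sum of sine waves."""
    return sum(a * math.sin(k * x - w * t) for a, k, w in components)

def boat_pitch(x, t, length=5.0):
    """Approximate boat pitch (radians) from the height difference
    between bow and stern, sampled half a boat-length apart."""
    dh = wave_height(x + length / 2, t) - wave_height(x - length / 2, t)
    return math.atan2(dh, length)
```

Sampling curves like these per frame is the kind of data that can be fed to a special-effects gimbal, so the practical boat motion and the later CG water simulation both come from the same source.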
    Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint?
Alex Wang // The Last of Us tells the story of our characters’ journey. If you look at how Season 2 begins in Jackson, it differs significantly from how we conclude the season in Seattle. We seldom return to the same location in each episode, meaning every episode presents a unique challenge. The scope of work this season has been incredibly rewarding. We burned a Bloater, and we also introduced spores this season!
    Photograph by Liane Hentscher/HBO
    Looking back on the project, what aspects of the visual effects are you most proud of?
Alex Wang // The Jackson Battle was incredibly complex, involving a grueling and lengthy shoot in quite challenging conditions, along with over 600 VFX shots in Episode 2. It was truly inspiring to witness the determination of every department and vendor to give their all and create something remarkable.
Fiona Campbell Westgate // I am immensely proud of the exceptional work accomplished by all of our vendors. During the VFX reviews, I found myself clapping with delight when the final shots were displayed; it was exciting to see the remarkable results of the artists’ efforts come to light.
    How long have you worked on this show?
    Alex Wang // I’ve been on this season for nearly two years.
    Fiona Campbell Westgate // A little over one year; I joined the show in April 2024.
    What’s the VFX shots count?
    Alex Wang // We had just over 2,500 shots this Season.
Fiona Campbell Westgate // In Season 2, there were a total of 2,656 visual effects shots.
    What is your next project?
    Fiona Campbell Westgate // Stay tuned…
    A big thanks for your time.
WANT TO KNOW MORE?
Blackbird: Dedicated page about The Last of Us – Season 2 on Blackbird website.
DNEG: Dedicated page about The Last of Us – Season 2 on DNEG website.
Important Looking Pirates: Dedicated page about The Last of Us – Season 2 on Important Looking Pirates website.
RISE: Dedicated page about The Last of Us – Season 2 on RISE website.
Weta FX: Dedicated page about The Last of Us – Season 2 on Weta FX website.
    © Vincent Frei – The Art of VFX – 2025
VFX enhanced the intensity of his movements, incorporating simulations to the CG Bloater’s skin and muscles that would reflect the weight and force as this terrifying creature moves.  Seattle in The Last of Us is a completely devastated city. Can you talk about how you recreated this destruction? What were the most difficult visual aspects to realize for this post-apocalyptic city? Fiona Campbell Westgate // We were meticulous in blending the CG destruction with the practical environment. The flora’s ability to overtake the environment had to be believable, and we adhered to the principle of form follows function. Due to the vastness of the CG devastation it was crucial to avoid repetitive effects. Consequently, our vendors were tasked with creating bespoke designs that evoked a sense of awe and beauty. Was Seattle’s architecture a key element in how you designed the visual effects? How did you adapt the city’s real-life urban landscape to meet the needs of the story while maintaining a coherent aesthetic? Alex Wang // It’s always important to Craig and Neil that we remain true to the cities our characters are in. DNEG was one of our primary vendors for Boston in Season 1, so it was natural for them to return for Season 2, this time focusing on Seattle. DNEG’s VFX Supervisor, Stephen James, who played a crucial role in developing the visual language of Boston for Season 1, also returns for this season. Stephen and Melaina Maceled a team to Seattle to shoot plates and perform lidar scans of parts of the city. We identified the buildings unique to Seattle that would have existed in 2003, so we ensured these buildings were always included in our establishing shots. Overgrowth and destruction have significantly influenced the environments in The Last of Us. The environment functions almost as a character in both Season 1 and Season 2. In the last season, the building destruction in Boston was primarily caused by military bombings. 
During this season, destruction mainly arises from dilapidation. Living in the Pacific Northwest, I understand how damp it can get for most of the year. I imagined that, over 20 years, the integrity of the buildings would be compromised by natural forces. This abundant moisture creates an exceptionally lush and vibrant landscape for much of the year. Therefore, when designing Seattle, we ensured that the destruction and overgrowth appeared intentional and aesthetically distinct from those of Boston. Fiona Campbell Westgate // Led by Stephen James, DNEG VFX Supervisor, and Melaina Mace, DNEG DFX Supervisor, the team captured photography, drone footage and the Clear Angle team captured LiDAR data over a three-day period in Seattle. It was crucial to include recognizable Seattle landmarks that would resonate with people familiar with the game.  The devastated city almost becomes a character in itself this season. What aspects of the visual effects did you have to enhance to increase the immersion of the viewer into this hostile and deteriorated environment? Fiona Campbell Westgate // It is indeed a character. Craig wanted it to be deteriorated but to have moments where it’s also beautiful in its devastation. For instance, in the Music Store in Episode 4 where Ellie is playing guitar for Dina, the deteriorated interior provides a beautiful backdrop to this intimate moment. The Set Decorating team dressed a specific section of the set, while VFX extended the destruction and overgrowth to encompass the entire environment, immersing the viewer in strange yet familiar surroundings. Photograph by Liane Hentscher/HBO The sequence where Ellie navigates a boat through a violent storm is stunning. What were the key challenges in creating this scene, especially with water simulation and the storm’s effects? Alex Wang // In the concluding episode of Season 2, Ellie is deep in Seattle, searching for Abby. 
The episode draws us closer to the Aquarium, where this area of Seattle is heavily flooded. Naturally, this brings challenges with CG water. In the scene where Ellie encounters Isaac and the W.L.F soldiers by the dock, we had a complex shoot involving multiple locations, including a water tank and a boat gimbal. There were also several full CG shots. For Isaac’s riverine boat, which was in a stormy ocean, I felt it was essential that the boat and the actors were given the appropriate motion. Weta FX assisted with tech-vis for all the boat gimbal work. We began with different ocean wave sizes caused by the storm, and once the filmmakers selected one, the boat’s motion in the tech-vis fed the special FX gimbal. When Ellie gets into the Jon boat, I didn’t want it on the same gimbal because I felt it would be too mechanical. Ellie’s weight needed to affect the boat as she got in, and that wouldn’t have happened with a mechanical gimbal. So, we opted to have her boat in a water tank for this scene. Special FX had wave makers that provided the boat with the appropriate movement. Instead of guessing what the ocean sim for the riverine boat should be, the tech- vis data enabled DNEG to get a head start on the water simulations in post-production. Craig wanted this sequence to appear convincingly dark, much like it looks out on the ocean at night. This allowed us to create dramatic visuals, using lightning strikes at moments to reveal depth. Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint? Alex Wang // The Last of Us tells the story of our characters’ journey. If you look at how season 2 begins in Jackson, it differs significantly from how we conclude the season in Seattle. We seldom return to the exact location in each episode, meaning every episode presents a unique challenge. The scope of work this season has been incredibly rewarding. 
We burned a Bloater, and we also introduced spores this season! Photograph by Liane Hentscher/HBO Looking back on the project, what aspects of the visual effects are you most proud of? Alex Wang // The Jackson Battle was incredibly complex, involving a grueling and lengthy shoot in quite challenging conditions, along with over 600 VFX shots in episode 2. It was truly inspiring to witness the determination of every department and vendor to give their all and create something remarkable. Fiona Campbell Westgate // I am immensely proud of the exceptional work accomplished by all of our vendors. During the VFX reviews, I found myself clapping with delight when the final shots were displayed; it was exciting to see remarkable results of the artists’ efforts come to light.  How long have you worked on this show? Alex Wang // I’ve been on this season for nearly two years. Fiona Campbell Westgate // A little over one year; I joined the show in April 2024. What’s the VFX shots count? Alex Wang // We had just over 2,500 shots this Season. Fiona Campbell Westgate // In Season 2, there were a total of 2656 visual effects shots. What is your next project? Fiona Campbell Westgate // Stay tuned… A big thanks for your time. WANT TO KNOW MORE?Blackbird: Dedicated page about The Last of Us – Season 2 website.DNEG: Dedicated page about The Last of Us – Season 2 on DNEG website.Important Looking Pirates: Dedicated page about The Last of Us – Season 2 website.RISE: Dedicated page about The Last of Us – Season 2 website.Weta FX: Dedicated page about The Last of Us – Season 2 website. © Vincent Frei – The Art of VFX – 2025 #last #season #alex #wang #production
    WWW.ARTOFVFX.COM
    The Last of Us – Season 2: Alex Wang (Production VFX Supervisor) & Fiona Campbell Westgate (Production VFX Producer)
    After detailing the VFX work on The Last of Us Season 1 in 2023, Alex Wang returns to reflect on how the scope and complexity have evolved in Season 2. With close to 30 years of experience in the visual effects industry, Fiona Campbell Westgate has contributed to major productions such as Ghost in the Shell, Avatar: The Way of Water, Ant-Man and the Wasp: Quantumania, and Nyad. Her work on Nyad earned her a VES Award for Outstanding Supporting Visual Effects in a Photoreal Feature.

    Collaboration with Craig Mazin and Neil Druckmann is key to shaping the visual universe of The Last of Us. Can you share with us how you work with them and how they influence the visual direction of the series?

    Alex Wang // Craig visualizes the shot or scene before putting words on the page. His writing is always exceptionally detailed and descriptive, ultimately helping us to imagine the shot. Of course, no one understands The Last of Us better than Neil, who knows all aspects of the lore very well. He has done a great deal of research and design work with the Naughty Dog team, so he gives us good guidance regarding creature and environment designs. I always try to begin with concept art to get the ball rolling with Craig and Neil’s ideas. This season, we collaborated with Chromatic Studios for concept art. They also contributed to the games, so I felt that continuity was beneficial for our show.

    Fiona Campbell Westgate // From the outset, it was clear that collaborating with Craig would be an exceptional experience. Early meetings revealed just how personable and invested Craig is. He works closely with every department to ensure that each episode is done to the highest level. Craig places unwavering trust in our VFX Supervisor, Alex Wang. They have an understanding between them that makes for an exceptional partnership.
As the VFX Producer, I know how vital the dynamic between the Showrunner and VFX Supervisor is; working with these two has made for one of the best professional experiences of my career.

Photograph by Liane Hentscher/HBO

How has your collaboration with Craig evolved between the first and second seasons? Were there any adjustments in the visual approach or narrative techniques you made this season?

Alex Wang // Since everything was new in Season 1, we dedicated a lot of time and effort to exploring the show’s visual language, and we all learned a great deal about what worked and what didn’t for the show. In my initial conversations with Craig about Season 2, it was clear that he wanted to expand the show’s scope by utilizing what we established and learned in Season 1. He felt significantly more at ease fully committing to using VFX to help tell the story this season.

The first season involved multiple VFX studios to handle the complexity of the effects. How did you divide the work among different studios for the second season?

Alex Wang // Most of the vendors this season were also in Season 1, so we already had a shorthand. The VFX Producer, Fiona Campbell Westgate, and I work closely together to decide how to divide the work among our vendors. The type of work needs to be well-suited for the vendor and fit into our budget and schedule. We were extremely fortunate to have the vendors we did this season. I want to take this opportunity to thank Weta FX, DNEG, RISE, Distillery VFX, Storm Studios, Important Looking Pirates, Blackbird, Wylie Co., RVX, and VDK. We also had ILM for concept art and Digital Domain for previs.

Fiona Campbell Westgate // Alex Wang and I were very aware of the tight delivery schedule, which added to the challenge of distributing the workload. We planned the work based on each studio’s capabilities, and tried not to burden them with back-to-back episodes wherever possible.
Fortunately, there was shorthand with vendors from Season One, who were well-acquainted with the process and the quality of work the show required.

The town of Jackson is a key location in The Last of Us. Could you explain how you approached creating and expanding this environment for the second season?

Alex Wang // Since Season 1, this show has created incredible sets. However, the Jackson town set build is by far the most impressive in terms of scope. They constructed an 822 ft x 400 ft set in Minaty Bay that resembled a real town! I had early discussions with Production Designer Don MacAulay and his team about where they should concentrate their efforts and where VFX would make the most sense to take over. They focused on developing the town’s main street, where we believed most scenes would occur. There is a big reveal of Jackson in the first episode after Ellie comes out of the barn. Distillery VFX was responsible for the town’s extension, which appears seamless because the team took great pride in researching and ensuring the architecture aligned with the set while staying true to the tone of Jackson, Wyoming.

Fiona Campbell Westgate // An impressive set was constructed in Minaty Bay, which served as the foundation for VFX to build upon. There is a beautiful establishing shot of Jackson in Episode 1 that was completed by Distillery, showing a safe and almost normal setting as Season Two starts. Across the episodes, Jackson set extensions were completed by our partners at RISE and Weta. Each had a different phase of Jackson to create, from almost idyllic to a town immersed in Battle.

What challenges did you face filming Jackson on both real and virtual sets? Was there a particular fusion between visual effects and live-action shots to make it feel realistic?

Alex Wang // I always advocate for building exterior sets outdoors to take advantage of natural light.
However, the drawback is that we cannot control the weather and lighting when filming over several days across two units. In Episode 2, there’s supposed to be a winter storm in Jackson, so maintaining consistency within the episode was essential. On sunny and rainy days, we used cranes to lift large 30x60ft screens to block the sun or rain. It was impossible to shield the entire set, so we prioritized protecting the actors from sunlight or rain. Thus, you can imagine there was extensive weather cleanup for the episode to ensure consistency within the sequences.

Fiona Campbell Westgate // We were fortunate that production built a large-scale Jackson set. It provided a base for the full CG Jackson aerial shots and CG Set Extensions. The weather conditions at Minaty Bay presented a challenge during the filming of the end of the Battle sequence in Episode 2: periods of bright sunshine alternated with rainfall. In addition to the obvious visual effects work, it became necessary to replace the ground cover.

Photograph by Liane Hentscher/HBO

The attack on Jackson by the horde of infected in season 2 is a very intense moment. How did you approach the visual effects for this sequence? What techniques did you use to make the scale of the attack feel as impressive as it did?

Alex Wang // We knew this would be a very complex sequence to shoot, and for it to be successful, we needed to start planning with the HODs from the very beginning. We began previs during prep with Weta FX and the episode’s director, Mark Mylod. The previs helped us understand Mark and the showrunner’s vision. This then served as a blueprint for all departments to follow, and in many instances, we filmed the previs.

Fiona Campbell Westgate // The sheer size of the CG Infected Horde sets the tone for the scale of the Battle. It’s an intimidating moment when they are revealed through the blowing snow.
The addition of CG explosions and atmospheric effects added scale to the sequence.

Can you give us an insight into the technical challenges of capturing the infected horde? How much of the effect was done using CGI, and how much was achieved with practical effects?

Alex Wang // Starting with a detailed previs that Mark and Craig approved was essential for planning the horde. We understood that we would never have enough stunt performers to fill a horde, nor could they carry out some stunts that would be too dangerous. I reviewed the previs with Stunt Coordinator Marny Eng numerous times to decide the best placements for her team’s stunt performers. We also collaborated with Barrie Gower from the Prosthetics team to determine the most effective allocation of his team’s efforts. Stunt performers positioned closest to the camera would receive the full prosthetic treatment, which can take hours.

Weta FX was responsible for the incredible CG Infected horde work in the Jackson Battle. They have been a creative partner with HBO’s The Last of Us since Season 1, so they were brought on early for Season 2. I began discussions with Weta’s VFX supervisor, Nick Epstein, about how we could tackle these complex horde shots very early during the shoot. Typically, repetition in CG crowd scenes can be acceptable, such as armies with soldiers dressed in the same uniform or armour. However, for our Infected horde, Craig wanted to convey that the Infected didn’t come off an assembly line or all shop at the same clothing department store. Any repetition would feel artificial. These Infected were once civilians with families, or they were groups of raiders. We needed complex variations in height, body size, age, clothing, and hair. We built our base library of Infected, and then Nick and the Weta FX team developed a “mix and match” system, allowing the Infected to wear any costume and hair groom.
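As a rough illustration of the “mix and match” idea described above, the sketch below assembles a horde by combining independent libraries of bodies, costumes, hair grooms, and motion cycles, rejecting any exact repeats. This is a hypothetical sketch, not Weta FX’s actual pipeline; all library names and sizes are invented for illustration.

```python
import random

def build_horde(n, bodies, costumes, hair, cycles, seed=0):
    """Sample n distinct (body, costume, hair, motion-cycle) combinations."""
    rng = random.Random(seed)
    seen = set()
    horde = []
    while len(horde) < n:
        combo = (rng.choice(bodies), rng.choice(costumes),
                 rng.choice(hair), rng.choice(cycles))
        if combo not in seen:  # reject exact repeats so no two agents match
            seen.add(combo)
            horde.append(combo)
    return horde

# Illustrative library sizes (the interview mentions ~600 mocap cycles);
# procedural costume texturing would multiply the variation further.
bodies = [f"body_{i}" for i in range(10)]       # height/size/age variants
costumes = [f"costume_{i}" for i in range(20)]
hair = [f"hair_{i}" for i in range(8)]
cycles = [f"run_{i}" for i in range(600)]

horde = build_horde(500, bodies, costumes, hair, cycles)
```

Even with small component libraries, the combinatorics (10 × 20 × 8 × 600 combinations here) make it easy to fill a large horde without any two agents sharing the same look and motion.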
A procedural texturing system was also developed for costumes, providing even greater variation. The most crucial aspect of the Infected horde was their motion. We had numerous shots cutting back-to-back with practical Infected, as well as shots where our CG Infected ran right alongside a stunt horde. It was incredibly unforgiving! Weta FX’s animation supervisor from Season 1, Dennis Yoo, returned for Season 2 to meet the challenge. Having been part of the first season, Dennis understood the expectations of Craig and Neil. As with model repetition, motion repetition within a horde was relatively easy to perceive, especially when the Infected were running toward the same target. It was essential to enhance the details of their performances with nuances such as tripping and falling, getting back up, and trampling over each other. There also needed to be a difference in the Infected’s running speed. To ensure we had enough complexity within the horde, Dennis motion-captured almost 600 unique motion cycles. We had over a hundred shots in Episode 2 that required the CG Infected horde.

Fiona Campbell Westgate // Nick Epstein, Weta VFX Supervisor, and Dennis Yoo, Weta Animation Supervisor, were faced with adding a hero, close-up Horde that had to integrate with practical Stunt performers. They achieved this through over 60 motion capture sessions, running the results through a deformation system they developed. Every detail was applied to allow for a seamless blend with our practical Stunt performances. The Weta team created a custom costume and hair system that provided individual looks to the CG Infected Horde. We were able to avoid the repetitive look of a CG crowd due to these efforts.

The movement of the infected horde is crucial for the intensity of the scene. How did you manage the animation and simulation of the infected to ensure smooth and realistic interaction with the environment?
Fiona Campbell Westgate // We worked closely with the Stunt department to plan out positioning and where VFX would be adding the CG Horde. Craig Mazin wanted the Infected Horde to move in a way that humans cannot. The deformation system kept the body shape anatomically correct and allowed us to push the limits of how a human physically moves.

The Bloater makes a terrifying return this season. What were the key challenges in designing and animating this creature? How did you work on the Bloater’s interaction with the environment and other characters?

Alex Wang // In Season 1, the Kansas City cul-de-sac sequence featured only a handful of Bloater shots. This season, however, nearly forty shots showcase the Bloater in broad daylight during the Battle of Jackson. We needed to redesign the Bloater asset to ensure it looked good in close-up shots from head to toe. Weta FX designed the Bloater for Season 1 and revamped the design for this season. Starting with the Bloater’s silhouette, it had to appear large, intimidating, and menacing. We explored enlarging the cordyceps head shape to make it feel almost like a crown, enhancing the Bloater’s impressive and strong presence.

During filming, a stunt double stood in for the Bloater. This was mainly for scale reference and composition. It also helped the Infected stunt performers understand the Bloater’s spatial position, allowing them to avoid running through his space. Once we had an edit, Dennis mocapped the Bloater’s performances with his team. It is always challenging to get the motion right for a creature that weighs 600 pounds. We don’t want the mocap to be overly exaggerated, but it does break the character if the Bloater feels too “light.” The brilliant animation team at Weta FX brought the Bloater character to life and nailed it!
When Tommy goes head-to-head with the Bloater, Craig was quite specific during the prep days about how the Bloater would bubble, melt, and burn as Tommy torches him with the flamethrower. Important Looking Pirates took on the “Burning Bloater” sequence, led by VFX Supervisor Philip Engström. They began with extensive R&D to ensure the Bloater’s skin would start to bubble and burn. ILP took the final Bloater asset from Weta FX and had to resculpt and texture the asset for the Bloater’s final burn state. Craig felt it was important for the Bloater to appear maimed at the end. The layers of FX were so complex that the R&D continued almost to the end of the delivery schedule.

Fiona Campbell Westgate // This season, the Bloater had to be bigger and more intimidating. The CG Asset was recreated to withstand the scrutiny of close-ups and daylight. Both Craig Mazin and Neil Druckmann worked closely with us during the process of the build. We referenced the game and blended elements of that version into ours. You’ll notice that his head is in the shape of a crown; this is to convey that he’s a powerful force.

During the Burning Bloater sequence in Episode 2, we brainstormed with Philip Engström, ILP VFX Supervisor, on how this creature would react to the flamethrower and how it would affect the ground as it burns. When the Bloater finally falls to the ground and dies, the extraordinary detail of the embers burning, fluid draining and melting the surrounding snow really sells that the CG creature was in the terrain.

Given the Bloater’s imposing size, how did you approach its integration into scenes with the actors? What techniques did you use to create such a realistic and menacing appearance?

Fiona Campbell Westgate // For the Bloater, a stunt performer wearing a motion capture suit was filmed on set. This provided interaction with the actors and the environment.
VFX enhanced the intensity of his movements, incorporating simulations into the CG Bloater’s skin and muscles that reflect the weight and force of this terrifying creature as it moves.

Seattle in The Last of Us is a completely devastated city. Can you talk about how you recreated this destruction? What were the most difficult visual aspects to realize for this post-apocalyptic city?

Fiona Campbell Westgate // We were meticulous in blending the CG destruction with the practical environment. The flora’s ability to overtake the environment had to be believable, and we adhered to the principle of form follows function. Due to the vastness of the CG devastation, it was crucial to avoid repetitive effects. Consequently, our vendors were tasked with creating bespoke designs that evoked a sense of awe and beauty.

Was Seattle’s architecture a key element in how you designed the visual effects? How did you adapt the city’s real-life urban landscape to meet the needs of the story while maintaining a coherent aesthetic?

Alex Wang // It’s always important to Craig and Neil that we remain true to the cities our characters are in. DNEG was one of our primary vendors for Boston in Season 1, so it was natural for them to return for Season 2, this time focusing on Seattle. DNEG’s VFX Supervisor, Stephen James, who played a crucial role in developing the visual language of Boston for Season 1, also returns for this season. Stephen and Melaina Mace (DFX Supervisor) led a team to Seattle to shoot plates and perform lidar scans of parts of the city. We identified the buildings unique to Seattle that would have existed in 2003, so we ensured these buildings were always included in our establishing shots.

Overgrowth and destruction have significantly influenced the environments in The Last of Us. The environment functions almost as a character in both Season 1 and Season 2. In the last season, the building destruction in Boston was primarily caused by military bombings.
During this season, destruction mainly arises from dilapidation. Living in the Pacific Northwest, I understand how damp it can get for most of the year. I imagined that, over 20 years, the integrity of the buildings would be compromised by natural forces. This abundant moisture creates an exceptionally lush and vibrant landscape for much of the year. Therefore, when designing Seattle, we ensured that the destruction and overgrowth appeared intentional and aesthetically distinct from those of Boston.

Fiona Campbell Westgate // Led by Stephen James, DNEG VFX Supervisor, and Melaina Mace, DNEG DFX Supervisor, the team captured photography and drone footage, while the Clear Angle team captured LiDAR data, over a three-day period in Seattle. It was crucial to include recognizable Seattle landmarks that would resonate with people familiar with the game.

The devastated city almost becomes a character in itself this season. What aspects of the visual effects did you have to enhance to increase the immersion of the viewer into this hostile and deteriorated environment?

Fiona Campbell Westgate // It is indeed a character. Craig wanted it to be deteriorated but to have moments where it’s also beautiful in its devastation. For instance, in the Music Store in Episode 4 where Ellie is playing guitar for Dina, the deteriorated interior provides a beautiful backdrop to this intimate moment. The Set Decorating team dressed a specific section of the set, while VFX extended the destruction and overgrowth to encompass the entire environment, immersing the viewer in strange yet familiar surroundings.

Photograph by Liane Hentscher/HBO

The sequence where Ellie navigates a boat through a violent storm is stunning. What were the key challenges in creating this scene, especially with water simulation and the storm’s effects?

Alex Wang // In the concluding episode of Season 2, Ellie is deep in Seattle, searching for Abby.
The episode draws us closer to the Aquarium, where this area of Seattle is heavily flooded. Naturally, this brings challenges with CG water. In the scene where Ellie encounters Isaac and the W.L.F. soldiers by the dock, we had a complex shoot involving multiple locations, including a water tank and a boat gimbal. There were also several full CG shots. For Isaac’s riverine boat, which was in a stormy ocean, I felt it was essential that the boat and the actors were given the appropriate motion. Weta FX assisted with tech-vis for all the boat gimbal work. We began with different ocean wave sizes caused by the storm, and once the filmmakers selected one, the boat’s motion in the tech-vis fed the special FX gimbal.

When Ellie gets into the Jon boat, I didn’t want it on the same gimbal because I felt it would be too mechanical. Ellie’s weight needed to affect the boat as she got in, and that wouldn’t have happened with a mechanical gimbal. So, we opted to have her boat in a water tank for this scene. Special FX had wave makers that provided the boat with the appropriate movement. Instead of guessing what the ocean sim for the riverine boat should be, the tech-vis data enabled DNEG to get a head start on the water simulations in post-production. Craig wanted this sequence to appear convincingly dark, much like it looks out on the ocean at night. This allowed us to create dramatic visuals, using lightning strikes at moments to reveal depth.

Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint?

Alex Wang // The Last of Us tells the story of our characters’ journey. If you look at how season 2 begins in Jackson, it differs significantly from how we conclude the season in Seattle. We seldom return to the exact location in each episode, meaning every episode presents a unique challenge. The scope of work this season has been incredibly rewarding.
We burned a Bloater, and we also introduced spores this season!

Photograph by Liane Hentscher/HBO

Looking back on the project, what aspects of the visual effects are you most proud of?

Alex Wang // The Jackson Battle was incredibly complex, involving a grueling and lengthy shoot in quite challenging conditions, along with over 600 VFX shots in Episode 2. It was truly inspiring to witness the determination of every department and vendor to give their all and create something remarkable.

Fiona Campbell Westgate // I am immensely proud of the exceptional work accomplished by all of our vendors. During the VFX reviews, I found myself clapping with delight when the final shots were displayed; it was exciting to see the remarkable results of the artists’ efforts come to light.

How long have you worked on this show?

Alex Wang // I’ve been on this season for nearly two years.

Fiona Campbell Westgate // A little over one year; I joined the show in April 2024.

What’s the VFX shot count?

Alex Wang // We had just over 2,500 shots this season.

Fiona Campbell Westgate // In Season 2, there were a total of 2,656 visual effects shots.

What is your next project?

Fiona Campbell Westgate // Stay tuned…

A big thanks for your time.

WANT TO KNOW MORE?
Blackbird: Dedicated page about The Last of Us – Season 2 website.
DNEG: Dedicated page about The Last of Us – Season 2 on DNEG website.
Important Looking Pirates: Dedicated page about The Last of Us – Season 2 website.
RISE: Dedicated page about The Last of Us – Season 2 website.
Weta FX: Dedicated page about The Last of Us – Season 2 website.

© Vincent Frei – The Art of VFX – 2025
  • PlayStation State of Play June 2025: Everything announced

    PlayStation State of Play June 2025: Everything announced
    Marvel at a night of sick reveals from Sony.

    Image credit: Eurogamer

    News

    by Connor Makar
    Contributor

    Published on June 4, 2025

    Another PlayStation State of Play took place tonight, giving us a peek at what's to come on the PS5 and PS5 Pro. The event was stacked with new trailers, including reveals of entirely new games we've not seen before. Sony even remembered the PS VR2 exists!
    Whether you're here to double check what you've just seen, or want a catch-up on all the reveals you missed, this article will take you through everything shown off at the June State of Play event. Enjoy!

    Lumines Arise
    The show kicked off with a reveal trailer for Lumines Arise, a colourful and very musical way to start the event. It comes from the developers behind Tetris Effect, which was a banger. It's coming this Fall, and you can wishlist it now.

    Watch the new Lumines Arise trailer here!Watch on YouTube
    Pragmata
    Next up is Pragmata, the mysterious Capcom game that we've seen precious little of since its first reveal years ago. This time we got gameplay, loads of third-person action goodness and a lovely sci-fi setting. Also, Diana is adorable. The game is set to come out at some point in 2026 on the PS5.

    Watch the Pragmata trailer here!Watch on YouTube
    Romeo is a Dead Man
    Next up is a grisly and cartoonish action game called Romeo is a Dead Man. It's coming from illustrious and brilliantly weird developer Grasshopper Manufacture, with both Suda 51 and Ren Yamazaki working on it. It'll be coming out in 2026... Maybe.

    Watch the Romeo is a Dead Man trailer here!Watch on YouTube
    Silent Hill f
    Now for something totally different with Silent Hill f, a third-person horror title that's certainly ramped the creepy factor up quite significantly. In it we see plenty of horrific mannequins with bloody knives in hand. It's set to come out on the 25th September, 2025 on the PS5.

    Watch the Silent Hill f trailer here!Watch on YouTube
    Bloodstained: The Scarlet Engagement
    We now go to a lush looking 2D side scroller called Bloodstained: The Scarlet Engagement, which has just been revealed. A sequel to a beloved indie and spiritual successor to classic Castlevania games, it'll be coming out in 2026 on the PS5.

    Check out the Bloodstained trailer here!Watch on YouTube
    Digimon Story Time Stranger
    Up next is a gameplay trailer for Digimon Story Time Stranger, which looks absolutely killer. In it we see a variety of fan-favourite Digimon and a cast of characters seemingly in the thick of some nefarious meddling between the physical and digital worlds. It'll be coming on the 3rd November 2025.

    Love me some Digimon. Watch the trailer here!Watch on YouTube
    Final Fantasy Tactics - The Ivalice Chronicles
    Now for a blast from the past with Final Fantasy Tactics: The Ivalice Chronicles. It'll include two versions of the original game, a classic version which is a faithful recreation, and an enhanced version with improved graphics and more. It'll be coming to the PS5 and PS4 on the 30th September.

    FF Tactics is back!Watch on YouTube
    Baby Steps

    Next up is Baby Steps, which is a fun exploration game where you've gotta control your leg movements as you climb a mountain. Now we finally have a release date! It comes to PS5 on the 8th September.

    This is a must-watch for those who like a chuckle.Watch on YouTube
    Hirogami
    Now for something whimsical. Hirogami allows you to transform into a variety of creatures through the power of folding. It's coming to PS5 on the 3rd September.

    A neat reveal! Check out the new trailerWatch on YouTube
    Everybody's Golf Hot Shots
    To the green we go with Everybody's Golf Hot Shots, with courses in 10 different regions around the world, each with weather effects and night-time variants. Those who pre-order get Pac-Man - rad! It comes to the PS5 on the 5th September.

    Check out some golf right here!Watch on YouTube
    Ninja Gaiden Ragebound
    Here's a retro throwback. Ninja Gaiden Ragebound got a new gameplay trailer, and is coming on the 21st July on PS5 and PS4.

    A big win for lovers of the classics. Watch the trailer here!Watch on YouTube
    Cairn
    The Game Bakers are back at it again with Cairn. A new gameplay trailer just dropped, showing a perilous and scenic climb up a massive mountain. Like all Game Bakers titles, the trailer has a rad music track. It's coming to PS5 on the 5th November 2025, but you can download a demo today!

    A moving trailer for you to watch right here!Watch on YouTube
    Mortal Kombat Legacy Kollection
    Get over here! A reveal trailer for the Mortal Kombat Legacy Kollection just dropped, and with it a blend of retro titles you'll be able to play with this new collection as well as some lovely retro arcade footage. It contains Mortal Kombat, Mortal Kombat 2, Mortal Kombat 3, Ultimate Mortal Kombat 3, Mortal Kombat 4, and more. It's coming to the PS5 and PS4 in 2025.

    Moooooortal Kooooooombat Traaaaaaailer.Watch on YouTube
    Metal Gear Solid Delta Snake Eater
    Here's a big one! We got a new gameplay trailer for Metal Gear Solid Delta Snake Eater. Loads of memorable locations, gadgets, and moments on display in the stunning new engine.

    This isn't a dream - it's Snake Eater!Watch on YouTube
    Nioh 3
    A rad new reveal comes via Nioh 3, which got a fantastic new gameplay trailer at the State of Play. It releases in early 2026 on the PS5. A demo is available right now too, so give it a try!

    If you love some challenging and bloody action, you should watch this Nioh 3 trailer!Watch on YouTube
    Thief VR: Legacy of Shadow
    Finally some PSVR love! A new reveal trailer for Thief VR: Legacy of Shadow was shown off, giving us a glimpse of loads of stealth, robbery, and action. It's coming to VR in 2025.

    Some VR rep at the State of Play!Watch on YouTube
    Tides of Tomorrow
    Another new game got a trailer, this time Tides of Tomorrow, a very bright first-person action game set in a flooded dystopian world. It's coming 24th February, 2026 on the PS5.

    Loving the look of this trailer, give it a watch!Watch on YouTube
    Astro Bot new update
    An update to Astro Bot is up next, containing five new challenge levels, new guest bots, and an announcement that the Astro Bot DualSense controller is coming back this year... With a twist!

    Here's a look at what's coming to Astro Bot!Watch on YouTube
    Sea of Remnants
    Pirate time! A reveal trailer for Sea of Remnants was just shown off, giving us a peek at sailing, navigating various islands, and facing the mythical creatures of the deep. It'll be coming in 2026, and you can wishlist it now.

    Grab your hat and cutlass, sailor.Watch on YouTube
    Sword of the Sea
    Here comes something beautiful. Sword of the Sea got a new trailer, with plenty of fantastic locations presented in vibrant tones throughout. It's coming on 19th August to the PS5, available on PlayStation Plus.

    Now this is my kind of vibe, watch it and find out why!Watch on YouTube
    FBC Firebreak
    Love co-op? FBC Firebreak is a spin-off to Remedy's Control series, and offers plenty of PvE action for those with a taste for the paranormal. It's coming to PS5 and the PlayStation Game Catalogue on 17th June, 2025.

    Here's the portion of the State of Play featuring FBC: Firebreak!Watch on YouTube
    New PS Plus games coming this summer
    A bunch of new games are coming to PlayStation Plus this Summer in a variety of forms. This includes:

    The original Deus Ex coming to PlayStation Plus Classics Catalogue on 17th June, 2025.
    Twisted Metal 3 & 4 coming to PlayStation Plus Classics Catalogue on 15th July, 2025.
    Resident Evil 2 & 3 coming to PlayStation Plus Classics Catalogue this Summer.
    Myst and Riven coming later this month as part of Days of Play

    007 First Light
    A massive reveal for the show! James Bond took the stage with 007 First Light, our first look at the game. It kicks off with some introductory cinematics, but there are snippets of gameplay showing loads of spy action. It's coming in 2026 to the PS5.

    It's Bond, James Bond... TrailerWatch on YouTube
    Ghost of Yotei
    A short one here. Ghost of Yotei is getting a gameplay deep dive in July.

    Stay tuned for more info!Watch on YouTube
    Marvel Tokon Fighting Souls
    Trying not to freak out over here. Arc System Works just revealed Marvel Tokon Fighting Souls. A 3v3 fighting game featuring plenty of beloved Marvel characters. It's coming to PS5 and PC in 2026.

    It's 11PM and I'm trying not to scream. Mahvel Baby!Watch on YouTube
  • VFX EMMY CONTENDERS: SETTING THE BENCHMARK FOR VISUAL EFFECTS ON TV

    By JENNIFER CHAMPAGNE

    House of the Dragon expands its dragon-filled world in its second season, offering more large-scale battles and heightened aerial warfare.

    The 2025 Emmy race for outstanding visual effects is shaping up to be one of the most competitive in years, with major genre heavyweights breaking new ground on what’s possible on television. As prestige fantasy and sci-fi continue to dominate, the battle for the category will likely come down to sheer scale, technical innovation and how seamlessly effects are integrated into storytelling. Returning titans like House of the Dragon and The Lord of the Rings: The Rings of Power have proven their ability to deliver breathtaking visuals. At the same time, Dune: Prophecy enters the conversation as a visually stunning newcomer. The Boys remains the category’s wildcard, bringing its own brand of hyper-realistic, shock-value effects to the race. With its subtle yet immersive world-building, The Penguin stands apart from the spectacle-driven contenders, using “invisible” VFX to transform Gotham into a post-flooded, decaying metropolis. Each series offers a distinct approach to digital effects, making for an intriguing showdown between blockbuster-scale world-building and more nuanced, atmospheric craftsmanship.

    Sharing the arena with marquee pacesetters HBO’s The Last of Us, Disney+’s Andor and Netflix’s Squid Game, these series lead the charge in ensuring that the 2025 Emmy race isn’t just about visual spectacle; it’s about which shows will set the next benchmark for visual effects on television. The following insights and highlights from VFX supervisors of likely Emmy contenders illustrate why their award-worthy shows have caught the attention of TV watchers and VFX Emmy voters.

    For The Lord of the Rings: The Rings of Power VFX Supervisor Jason Smith, the second season presented some of the Amazon series’ most ambitious visual effects challenges. From the epic Battle of Eregion to the painstaking design of the Entwives, Smith and his team at Wētā FX sought to advance digital world-building while staying true to J.R.R. Tolkien’s vision. “The Battle of Eregion was amazing to work on – and challenging too, because it’s a pivotal moment in Tolkien’s story,” Smith states. Unlike typical large-scale clashes, this battle begins as a siege culminating in an explosive cavalry charge. “We looked for every way we could to heighten the action during the siege by keeping the armies interacting, even at a distance,” Smith explains. His team introduced projectiles and siege weaponry to create dynamic action, ensuring the prolonged standoff felt kinetic. The environment work for Eregion posed another challenge. The city was initially constructed as a massive digital asset in Season 1, showcasing the collaborative brilliance of the Elves and Dwarves. In Season 2, that grandeur had to be systematically razed to the ground. “The progression of destruction had to be planned extremely carefully,” Smith notes. His team devised seven distinct levels of damage, mapping out in granular detail which areas would be smoldering, reduced to rubble or utterly consumed by fire. “Our goal was to have the audience feel the loss that the Elves feel as this beautiful symbol of the height of Elvendom is utterly razed.”

    The SSVFX team helped shape a world for Lady in the Lake that felt rich, lived-in and historically precise.

    One of the most ambitious effects for Season 4 of The Boys was Splinter, who has the ability to duplicate himself; a single shot required eight hours of rehearsal and six hours of filming. The final effect was a mix of prosthetic cover-up pieces and VFX face replacement.

    The Penguin, HBO Max’s spinoff series of The Batman, centers on Oswald ‘Oz’ Cobb’s ruthless rise to power and relies on meticulous environmental effects, smoothly integrating CG elements to enhance Gotham’s noir aesthetic without ever calling attention to the work itself. “The most rewarding part of our work was crafting VFX that don’t feel like VFX,” says VFX Supervisor Johnny Han. Across the series’ 3,100 VFX shots, every collapsing freeway, skyline extension and flicker of light from a muzzle flash had to feel utterly real – woven so naturally into the world of Gotham that viewers never stopped to question its authenticity.

    Zimia spaceport, an enormous hub of interstellar commerce in Dune: Prophecy. The production team built a vast practical set to provide a strong scale foundation, but its full grandeur came to life in post by extending this environment with CG. The second season of The Lord of the Rings: The Rings of Power refined its environments, elevating Middle-earth’s realism.

    Some of The Penguin’s most striking visual moments were also its most understated. The shift of Gotham’s seasons – transforming sunlit summer shoots into autumn’s muted chill – helped shape the show’s somber tone, reinforcing the bleak, crime-ridden undercurrent. The city’s bridges and skyscrapers were meticulously augmented, stretching Gotham beyond the limits of practical sets while preserving its grounded, brutalist aesthetic. Even the scars and wounds on Sofia Falcone were enhanced through digital artistry, ensuring that her past traumas remained ever-present, etched into her skin.

    The series wasn’t without its large-scale effects – far from it. Han and his team orchestrated massive sequences of urban devastation. “The floodwaters were one of our biggest challenges,” Han notes, referring to the ongoing impact of the catastrophic deluge that left Gotham in ruins. One particularly harrowing sequence required simulating a tsunami tearing through the streets – not as an action set piece, but as a deeply personal moment of loss. “Telling Victor’s story of how he lost his entire family in the bombing and floods of Gotham was heartbreaking,” Han says. “Normally, you create an event like that for excitement, for tension. But for us, it was about capturing emotional devastation.”

    Perhaps the most technically intricate sequences were the shootouts, hallmarks of Gotham’s criminal underbelly. “We programmed millisecond-accurate synced flash guns to mimic dramatic gunfire light,” Han explains, ensuring that the interplay of practical and digital elements remained imperceptible. Every muzzle flash, every ricochet was meticulously planned and rendered. The ultimate achievement for Han and his team wasn’t crafting the biggest explosion or the most elaborate digital sequence – it was making Gotham itself feel inescapably real. He says, “Nothing was more important to us than for you to forget that there are 3,100 VFX shots in this series.”

    The challenge for The Residence was making one of the most recognizable buildings in the world feel both immersive and narratively engaging.

    Bringing the universe of Dune to life on TV for HBO’s Dune: Prophecy requires a delicate balance of realism and imagination, grounded in natural physics yet awe-inspiring in scale. Dune: Prophecy looks to challenge traditional fantasy dominance with its stunning, desert-bound landscapes and intricate space-faring visuals, uniting the grandeur of Denis Villeneuve’s films with the demands of episodic storytelling. Set thousands of years before the events of the films, the series explores the early days of the Bene Gesserit, a secretive order wielding extraordinary abilities. Translating that power into a visual language required technical innovation. “Kudos to Important Looking Pirates for the space folding and Agony work,” says VFX Supervisor Mike Enriquez. No Dune project would be complete without its most iconic inhabitant, the sandworm. VFX Producer Terron Pratt says, “We’re incredibly proud of what the team at Image Engine created. Precise animation conveyed this creature’s weight and massive scale, while incredibly detailed sand simulations integrated it into the environment.” Every grain of sand had to move believably in response to the worm’s colossal presence to ensure the physics of Arrakis remained authentic.

    Floodwaters play a significant part in the destruction of Gotham in The Penguin. One particularly harrowing sequence required simulating a tsunami tearing through the streets. American Primeval integrated visual effects with practical techniques in creative, unconventional ways. The massacre sequence showcases technical mastery and pulls the audience into the brutal reality of the American frontier.

    For the Zimia spaceport, an enormous hub of interstellar commerce, the Dune: Prophecy production team built a vast practical set to provide a strong scale foundation. However, its full grandeur came to life in post. “By extending this environment with CG, we amplified the scope of our world, making it feel expansive and deeply impactful,” Pratt explains. The result was a sprawling, futuristic cityscape that retained a tangible weight, its practical and digital elements impeccably blended.

    Wētā FX sought to advance digital world-building for Season 2 of The Lord of the Rings: The Rings of Power while staying true to J.R.R. Tolkien’s vision. Visual effects extended beyond character work for Lady in the Lake, playing a key role in the show’s immersive world-building.

    For House of the Dragon VFX Supervisor Daði Einarsson, Season 2 presented some of the HBO show’s most complex and ambitious visual effects work. The Battle at Rook’s Rest in Episode 4 was a milestone for the series, marking the first full-scale dragon-on-dragon aerial battle. “We were tasked with pitting three dragons against each other in an all-out aerial war above a castle siege,” Einarsson says. Capturing the actors’ performances mid-flight required a combination of motion-controlled cameras, preprogrammed motion bases with saddles and LED volume lighting – all mapped directly from fully animated previsualized sequences approved by director Alan Taylor and Showrunner Ryan J. Condal. On the ground, the battlefield required digital crowd replication, extensive environment extensions and pyrotechnic enhancements to create a war zone that felt both vast and intimately chaotic. “In the air, we created a fully CG version of the environment to have full control over the camera work,” Einarsson explains. Under the supervision of Sven Martin, the Pixomondo team stitched together breathtaking aerial combat, ensuring the dragons moved with the weight and raw power befitting their legendary status.

    Blood, weapon effects and period-accurate muzzle flashes heightened the intensity of the brutal fight sequences in American Primeval. The natural elements and violence reflected the harsh realities of the American West in 1857. The Residence brings a refined, detailed approach to environmental augmentation, using visual effects to take the audience on a journey through the White House in this political murder mystery.

    Episode 7 introduced Hugh Hammer’s claim of Vermithor, Westeros’ second-largest dragon. Rather than breaking the sequence into multiple shots, Einarsson and director Loni Peristere saw an opportunity to craft something exceptional: a single, uninterrupted long take reminiscent of Children of Men and Gravity. “It took a lot of planning to design a series of beats that cohesively flowed from one into the next, with Hugh leading the camera by action and reaction,” Einarsson says. The sequence, which involved Hugh dodging Vermithor’s flames and ultimately claiming the beast through sheer bravery, was technically demanding. To achieve it, the team stitched together five separate takes of Hugh’s performance, shot over two days weeks apart because the set had to be struck and rebuilt in different configurations. VFX Supervisor Wayne Stables and the team at Wētā ensured the transitions were imperceptible, uniting practical and digital elements into a continuous, immersive moment. “The Dragonmont Cavern environment was a beautiful, raised gantry and cave designed by Jim Clay and expanded by Wētā,” Einarsson says. Then Rowley Imran’s stunt team and Mike Dawson’s SFX team engulfed the set in practical flames so every element, from fire to dust to movement, contributed to the illusion of real-time danger.

    For Einarsson, the most significant challenge wasn’t just in making these sequences visually spectacular – it was ensuring they belonged within the same world as the quiet, dialogue-driven moments in King’s Landing. “The aim is for incredibly complex and spectacular visual effects scenes to feel like they belong in the same world as two people talking in a council chamber,” he states. Every dragon, flame and gust of wind had to feel as lived-in as the politics playing out beneath them.

    Season 4 of The Boys delivered the fully CG octopus character, Ambrosius. A challenge was crafting a believable yet expressive sea creature and keeping it grounded while still embracing the show’s signature absurdity. In The Penguin, Gotham isn’t just a city; it’s a living, breathing entity shaped by destruction, decay and the quiet menace lurking beneath its streets.

    The Boys continues to defy genre norms, delivering audacious, technically complex effects that lean into its hyperviolent, satirical take on superheroes. For The Boys VFX Supervisor Stephan Fleet, Season 4 delivered some of the Amazon Prime show’s most dramatic effects yet, from the self-replicating Splinter to the fully CG octopus character, Ambrosius. Splinter, who has the ability to duplicate himself, presented a unique challenge. Fleet says, “His introduction on the podium was a complex motion control sequence. Eight hours of rehearsal, six hours of filming – for one shot.” Splinter’s design came with an added layer of difficulty. “We had to figure out how to make a nude male clone,” Fleet says. “Normally, you can hide doubles’ bodies in clothes – not this time!” The final effect required a mix of prosthetic cover-up pieces and VFX face replacement, and it took multiple iterations to make it work. Ambrosius became one of The Boys’ most unexpected breakout characters. “It’s fun making a full-on character in the show that’s an octopus,” Fleet reveals in a nod to the show’s absurd side. “As much as possible, we aim for a grounded approach and try to attain a level of thought and detail you don’t often find on TV.”

    While the battle for outstanding visual effects will likely be dominated by large-scale fantasy and sci-fi productions, several standout series are also making waves with their innovative and immersive visual storytelling. Netflix’s The Residence, led by VFX Supervisor Seth Hill, brings a refined, detailed approach to environmental augmentation, enhancing the grandeur of the White House setting in this political murder mystery. “Using visual effects to take the audience on a journey through an iconic location like the White House was really fun,” Hill says. “It’s a cool and unique use of visual effects.” One of the most ambitious sequences involved what the team called the Doll House, a digital rendering of the White House with its south façade removed, exposing the interior like a cross-section of a dollhouse. “Going back and forth from filmed footage to full CGI – that jump from grounded realism to abstract yet still real – was quite tricky,” Hill explains, adding, “VFX is best when it is in service of the storytelling, and The Residence presented a unique opportunity to do just that. It was a big challenge and a tough nut to crack, but those creative and technical hurdles are a good part of what makes it so rewarding.”

    “We were tasked with pitting three dragons against each other in an all-out aerial war above a castle siege. In the air, we created a fully CG version of the environment to have full control over the camera work.”—Daði Einarsson, VFX Supervisor, House of the Dragon

    The Battle at Rook’s Rest in Episode 4 of House of the Dragon Season 2 was a major milestone for the series, marking the first full-scale dragon-on-dragon aerial battle. Season 2 of House of the Dragon presented some of the most complex and ambitious visual effects work for the show to date.

    For Jay Worth, VFX Supervisor on Apple TV+’s Lady in the Lake, the challenge was two-fold: create seamless effects and preserve the raw emotional truth of a performance. One of the most significant technical achievements was de-aging Natalie Portman. “It seems so easy on paper, but the reality was far more challenging,” Worth admits. Worth had tackled de-aging before, but never with the same level of success. “For me, it is simply because of her performance.” Portman delivered a nuanced, youthful portrayal that felt entirely authentic to the time period. “It made our job both so much easier and set the bar so high for us. Sometimes, you can hide in a scene like this – you pull the camera back, cut away before the most expressive parts of the dialogue, or the illusion breaks,” Worth explains. In Lady in the Lake, there was nowhere to hide. “I think that is what I am most proud of with these shots. It felt like the longer you stayed on them, the more you believed them. That is a real feat with this sort of work.” Skully VFX handled the de-aging. “They nailed the look early on and delivered throughout the project on this difficult task.” Working alongside Production Designer Jc Molina, the VFX team helped shape a world that felt rich, lived-in and historically precise. “We were entrusted with the most important part of this show – do we believe this performance from this character in this part of her journey? – and we feel like we were able to deliver on this challenge.”

    On the other end of the spectrum, Netflix’s American Primeval, under the guidance of VFX Supervisor Andrew Ceperley, delivers rugged, visceral realism in its portrayal of the untamed American frontier. With brutal battle sequences, sprawling landscapes and historical re-creations that interweave practical and digital effects, the series stands as a testament to how VFX can enhance grounded, historical storytelling. Ceperley says, “The standout is definitely the nearly three-minute single-shot massacre sequence in the forest episode.” Designed to immerse the audience in the raw, chaotic violence of the frontier, the scene captures every brutal detail with unrelenting intensity. The challenge was crafting invisible visual effects, enhancing practical stunts and destruction without breaking the immersive, handheld camera style. “The sequence was designed to be one shot made up of 10 individual takes, shot over seven days, seamlessly stitched together, all while using a handheld camera on an extremely wide-angle lens.” One of the most complex moments involved a bull smashing through a wagon while the characters hid underneath. Rather than relying on CGI, the team took a practical approach, placing a 360-degree camera under the wagon while the special effects team rigged it to explode in a way that simulated an impact. “A real bull was then guided to run toward the 360 camera and leap over it,” Ceperley says. The footage was blended with live-action shots of the actors with minimal CGI enhancements – just dust and debris – to complete the effect. Adding to the difficulty, the scene was set at sunset, giving the team an extremely limited window to capture each day’s footage. 
The massacre sequence was a prime example of integrating visual effects with practical techniques in creative, unconventional ways, blending old-school in-camera effects with modern stitching techniques to create a visceral cinematic moment that stayed true to the show’s raw, historical aesthetic. “Using old techniques in new, even strange ways and seeing it pay off and deliver on the original vision was the most rewarding part.”
    VFX EMMY CONTENDERS: SETTING THE BENCHMARK FOR VISUAL EFFECTS ON TV

    By JENNIFER CHAMPAGNE

    House of the Dragon expands its dragon-filled world in its second season, offering more large-scale battles and heightened aerial warfare.

    The 2025 Emmy race for outstanding visual effects is shaping up to be one of the most competitive in years, with major genre heavyweights breaking new ground on what’s possible on television. As prestige fantasy and sci-fi continue to dominate, the battle for the category will likely come down to sheer scale, technical innovation and how seamlessly effects are integrated into storytelling. Returning titans like House of the Dragon and The Lord of the Rings: The Rings of Power have proven their ability to deliver breathtaking visuals. At the same time, Dune: Prophecy enters the conversation as a visually stunning newcomer. The Boys remains the category’s wildcard, bringing its own brand of hyper-realistic, shock-value effects to the race. With its subtle yet immersive world-building, The Penguin stands apart from the spectacle-driven contenders, using “invisible” VFX to transform Gotham into a post-flooded, decaying metropolis. Each series offers a distinct approach to digital effects, making for an intriguing showdown between blockbuster-scale world-building and more nuanced, atmospheric craftsmanship. Sharing the arena with marquee pacesetters HBO’s The Last of Us, Disney+’s Andor and Netflix’s Squid Game, these series lead the charge in ensuring that the 2025 Emmy race isn’t just about visual spectacle; it’s about which shows will set the next benchmark for visual effects on television. The following insights and highlights from VFX supervisors of likely Emmy contenders illustrate why their award-worthy shows have caught the attention of TV watchers and VFX Emmy voters.
    WWW.VFXVOICE.COM
    VFX EMMY CONTENDERS: SETTING THE BENCHMARK FOR VISUAL EFFECTS ON TV
    By JENNIFER CHAMPAGNE House of the Dragon expands its dragon-filled world in its second season, offering more large-scale battles and heightened aerial warfare. (Image courtesy of HBO) The 2025 Emmy race for outstanding visual effects is shaping up to be one of the most competitive in years with major genre heavyweights breaking new ground on what’s possible on television. As prestige fantasy and sci-fi continue to dominate, the battle for the category will likely come down to sheer scale, technical innovation and how seamlessly effects are integrated into storytelling. Returning titans like House of the Dragon and The Lord of the Rings: The Rings of Power have proven their ability to deliver breathtaking visuals. At the same time, Dune: Prophecy enters the conversation as a visually stunning newcomer. The Boys remains the category’s wildcard, bringing its own brand of hyper-realistic, shock-value effects to the race. With its subtle yet immersive world-building, The Penguin stands apart from the spectacle-driven contenders, using “invisible” VFX to transform Gotham into a post-flooded, decaying metropolis. Each series offers a distinct approach to digital effects, making for an intriguing showdown between blockbuster-scale world-building and more nuanced, atmospheric craftsmanship. Sharing the arena with marquee pacesetters HBO’s The Last of Us, Disney+’s Andor and Netflix’s Squid Game, these series lead the charge in ensuring that the 2025 Emmy race isn’t just about visual spectacle; it’s about which shows will set the next benchmark for visual effects on television. The following insights and highlights from VFX supervisors of likely Emmy contenders illustrate why their award-worthy shows have caught the attention of TV watchers and VFX Emmy voters. The Penguin, with its subtle yet immersive world-building, stands apart from the spectacle-driven contenders, using “invisible” VFX to transform Gotham into a post-flooded, decaying metropolis.  
(Image courtesy of HBO) For The Lord of the Rings: The Rings of Power VFX Supervisor Jason Smith, the second season presented some of the Amazon series’ most ambitious visual effects challenges. From the epic Battle of Eregion to the painstaking design of the Entwives, Smith and his team at Wētā FX sought to advance digital world-building while staying true to J.R.R. Tolkien’s vision. “The Battle of Eregion was amazing to work on – and challenging too, because it’s a pivotal moment in Tolkien’s story,” Smith states. Unlike typical large-scale clashes, this battle begins as a siege culminating in an explosive cavalry charge. “We looked for every way we could to heighten the action during the siege by keeping the armies interacting, even at a distance,” Smith explains. His team introduced projectiles and siege weaponry to create dynamic action, ensuring the prolonged standoff felt kinetic. The environment work for Eregion posed another challenge. The city was initially constructed as a massive digital asset in Season 1, showcasing the collaborative brilliance of the Elves and Dwarves. In Season 2, that grandeur had to be systematically razed to the ground. “The progression of destruction had to be planned extremely carefully,” Smith notes. His team devised seven distinct levels of damage, mapping out in granular detail which areas would be smoldering, reduced to rubble or utterly consumed by fire. “Our goal was to have the audience feel the loss that the Elves feel as this beautiful symbol of the height of Elvendom is utterly razed.” The SSVFX team helped shape a world for Lady in the Lake that felt rich, lived-in and historically precise. (Image courtesy of Apple TV+) One of the most ambitious effects for Season 4 of The Boys was Splinter, who has the ability to duplicate himself. The sequence required eight hours of rehearsal, six hours of filming, for one shot. The final effect was a mix of prosthetic cover-up pieces and VFX face replacement. 
(Image courtesy of Prime Video) The Penguin, HBO Max’s spinoff series of The Batman, centers on Oswald ‘Oz’ Cobb’s ruthless rise to power, and relies on meticulous environmental effects, smoothly integrating CG elements to enhance Gotham’s noir aesthetic without ever calling attention to the work itself. “The most rewarding part of our work was crafting VFX that don’t feel like VFX,” says VFX Supervisor Johnny Han. Across the series’ 3,100 VFX shots, every collapsing freeway, skyline extension and flicker of light from a muzzle flash had to feel utterly real – woven so naturally into the world of Gotham that viewers never stopped to question its authenticity. Zimia spaceport, an enormous hub of interstellar commerce in Dune: Prophecy. The production team built a vast practical set to provide a strong scale foundation, but its full grandeur came to life in post by extending this environment with CG. (Images courtesy of HBO) The second season of The Lord of the Rings: The Rings of Power refined its environments, which elevate Middle-earth’s realism. (Image courtesy of Prime Video) Some of the series’ most striking visual moments were also its most understated. The shift of Gotham’s seasons – transforming sunlit summer shoots into autumn’s muted chill – helped shape the show’s somber tone, reinforcing the bleak, crime-ridden undercurrent. The city’s bridges and skyscrapers were meticulously augmented, stretching Gotham beyond the limits of practical sets while preserving its grounded, brutalist aesthetic. Even the scars and wounds on Sofia Falcone were enhanced through digital artistry, ensuring that her past traumas remained ever-present, etched into her skin. The series wasn’t without its large-scale effects – far from it. Han and his team orchestrated massive sequences of urban devastation. “The floodwaters were one of our biggest challenges,” Han notes, referring to the ongoing impact of the catastrophic deluge that left Gotham in ruins. 
One particularly harrowing sequence required simulating a tsunami tearing through the streets – not as an action set piece, but as a deeply personal moment of loss. “Telling Victor’s story of how he lost his entire family in the bombing and floods of Gotham was heartbreaking,” Han says. “Normally, you create an event like that for excitement, for tension. But for us, it was about capturing emotional devastation.” Perhaps the most technically intricate sequences were the shootouts, hallmarks of Gotham’s criminal underbelly. “We programmed millisecond-accurate synced flash guns to mimic dramatic gunfire light,” Han explains, ensuring that the interplay of practical and digital elements remained imperceptible. Every muzzle flash, every ricochet was meticulously planned and rendered. The ultimate achievement for Han and his team wasn’t crafting the biggest explosion or the most elaborate digital sequence – it was making Gotham itself feel inescapably real. He says, “Nothing was more important to us than for you to forget that there are 3,100 VFX shots in this series.” The challenge for The Residence was making one of the most recognizable buildings in the world feel both immersive and narratively engaging. (Photo: Erin Simkin. Courtesy of Netflix) Bringing the universe of Dune to life on TV for HBO’s Dune: Prophecy requires a delicate balance of realism and imagination, grounded in natural physics, yet awe-inspiring in scale. Dune: Prophecy looks to challenge traditional fantasy dominance with its stunning, desert-bound landscapes and intricate space-faring visuals, uniting the grandeur of Denis Villeneuve’s films with the demands of episodic storytelling. Set thousands of years before the events of the films, the series explores the early days of the Bene Gesserit, a secretive order wielding extraordinary abilities. Translating that power into a visual language required technical innovation. 
“Kudos to Important Looking Pirates for the space folding and [Lila’s] Agony work,” says VFX Supervisor Mike Enriquez. No Dune project would be complete without its most iconic inhabitant, the sandworm. VFX Producer Terron Pratt says, “We’re incredibly proud of what the team at Image Engine created. Precise animation conveyed this creature’s weight and massive scale, while incredibly detailed sand simulations integrated it into the environment.” Every grain of sand had to move believably in response to the worm’s colossal presence to ensure the physics of Arrakis remained authentic. Floodwaters play a significant part in the destruction of Gotham in The Penguin. One particularly harrowing sequence required simulating a tsunami tearing through the streets. (Image courtesy of HBO) American Primeval integrated visual effects with practical techniques in creative, unconventional ways. The massacre sequence showcases technical mastery and pulls the audience into the brutal reality of the American frontier. (Photo: Justin Lubin. Courtesy of Netflix) For the Zimia spaceport, an enormous hub of interstellar commerce, the Dune: Prophecy production team built a vast practical set to provide a strong scale foundation. However, its full grandeur came to life in post. “By extending this environment with CG, we amplified the scope of our world, making it feel expansive and deeply impactful,” Pratt explains. The result was a sprawling, futuristic cityscape that retained a tangible weight with impeccably amalgamated practical and digital elements. Wētā FX sought to advance digital world-building for Season 2 of The Lord of the Rings: The Rings of Power while staying true to J.R.R. Tolkien’s vision. (Image courtesy of Prime Video) Visual effects extended beyond character work for Lady in the Lake, playing a key role in the show’s immersive world-building. 
(Image courtesy of Apple TV+) For House of the Dragon VFX Supervisor Daði Einarsson, Season 2 presented some of the HBO show’s most complex and ambitious visual effects work. The Battle at Rook’s Rest in Episode 4 was a milestone for the series, marking the first full-scale dragon-on-dragon aerial battle. “We were tasked with pitting three dragons against each other in an all-out aerial war above a castle siege,” Einarsson says. Capturing the actors’ performances mid-flight required a combination of motion-controlled cameras, preprogrammed motion bases with saddles and LED volume lighting – all mapped directly from fully animated previsualized sequences approved by director Alan Taylor and Showrunner Ryan J. Condal. On the ground, the battlefield required digital crowd replication, extensive environment extensions, and pyrotechnic enhancements to create a war zone that felt both vast and intimately chaotic. “In the air, we created a fully CG version of the environment to have full control over the camera work,” Einarsson explains. Under the supervision of Sven Martin, the Pixomondo team stitched together breathtaking aerial combat, ensuring the dragons moved with the weight and raw power befitting their legendary status. Blood, weapon effects and period-accurate muzzle flashes heightened the intensity of the brutal fight sequences in American Primeval. The natural elements and violence reflected the harsh realities of the American west in 1857. (Image courtesy of Netflix) The Residence brings a refined, detailed approach to environmental augmentation, using visual effects to take the audience on a journey through the White House in this political murder mystery. (Photo: Jessica Brooks. Courtesy of Netflix) Episode 7 introduced Hugh Hammer’s claim of Vermithor, Westeros’ second-largest dragon. 
Rather than breaking the sequence into multiple shots, Einarsson and director Loni Peristere saw an opportunity to craft something exceptional: a single, uninterrupted long take reminiscent of Children of Men and Gravity. “It took a lot of planning to design a series of beats that cohesively flowed from one into the next, with Hugh leading the camera by action and reaction,” Einarsson says. The sequence, which involved Hugh dodging Vermithor’s flames and ultimately claiming the beast through sheer bravery, was technically demanding. To achieve this, the team stitched together five separate takes of Hugh’s performance, shot over two separate days, weeks apart, due to the set needing to be struck and rebuilt in different configurations. VFX Supervisor Wayne Stables and the team at Wētā ensured the transitions were imperceptible, uniting practical and digital elements into a continuous, immersive moment. “The Dragonmont Cavern environment was a beautiful, raised gantry and cave designed by [Production Designer] Jim Clay and expanded by Wētā,” Einarsson says. Then Rowley Imran’s stunt team and Mike Dawson’s SFX team engulfed the set in practical flames so every element, from fire to dust to movement, contributed to the illusion of real-time danger. For Einarsson, the most significant challenge wasn’t just in making these sequences visually spectacular – it was ensuring they belonged within the same world as the quiet, dialogue-driven moments in King’s Landing. “The aim is for incredibly complex and spectacular visual effects scenes to feel like they belong in the same world as two people talking in a council chamber,” he states. Every dragon, flame and gust of wind had to feel as lived-in as the politics playing out beneath them. Season 4 of The Boys delivered the fully CG octopus character, Ambrosius. A challenge was crafting a believable yet expressive sea creature and keeping it grounded while still embracing the show’s signature absurdity. 
(Image courtesy of Prime Video) In The Penguin, Gotham isn’t just a city; it’s a living, breathing entity shaped by destruction, decay and the quiet menace lurking beneath its streets. (Images courtesy of HBO) The Boys continues to defy genre norms, delivering audacious, technically complex effects that lean into its hyperviolent, satirical take on superheroes. For The Boys VFX Supervisor Stephan Fleet, Season 4 delivered some of the Amazon Prime show’s most dramatic effects yet, from the self-replicating Splinter to the fully CG octopus character, Ambrosius. Splinter, who has the ability to duplicate himself, presented a unique challenge. Fleet says, “His introduction on the podium was a complex motion control sequence. Eight hours of rehearsal, six hours of filming – for one shot.” Splinter’s design came with an added layer of difficulty. “We had to figure out how to make a nude male clone,” Fleet says. “Normally, you can hide doubles’ bodies in clothes – not this time!” The final effect required a mix of prosthetic cover-up pieces and VFX face replacement, requiring multiple iterations to make it work. Ambrosius became one of The Boys’ most unexpected breakout characters. “It’s fun making a full-on character in the show that’s an octopus,” Fleet reveals in a nod to the show’s absurd side. “As much as possible, we aim for a grounded approach and try to attain a level of thought and detail you don’t often find on TV.” While the battle for outstanding visual effects will likely be dominated by large-scale fantasy and sci-fi productions, several standout series are also making waves with their innovative and immersive visual storytelling. Netflix’s The Residence, led by VFX Supervisor Seth Hill, brings a refined, detailed approach to environmental augmentation, enhancing the grandeur of the White House setting in this political murder mystery. 
“Using visual effects to take the audience on a journey through an iconic location like the White House was really fun,” Hill says. “It’s a cool and unique use of visual effects.” One of the most ambitious sequences involved what the team called the Doll House, a digital rendering of the White House with its south façade removed, exposing the interior like a cross-section of a dollhouse. “Going back and forth from filmed footage to full CGI – that jump from grounded realism to abstract yet still real – was quite tricky,” Hill explains, adding, “VFX is best when it is in service of the storytelling, and The Residence presented a unique opportunity to do just that. It was a big challenge and a tough nut to crack, but those creative and technical hurdles are a good part of what makes it so rewarding.” “We were tasked with pitting three dragons against each other in an all-out aerial war above a castle siege. In the air, we created a fully CG version of the environment to have full control over the camera work.”—Daði Einarsson, VFX Supervisor, House of the Dragon The Battle at Rook’s Rest in Episode 4 of House of the Dragon Season 2 was a major milestone for the series, marking the first full-scale dragon-on-dragon aerial battle. (Image courtesy of HBO) Season 2 of House of the Dragon presented some of the most complex and ambitious visual effects work for the show to date. (Photo: Theo Whiteman. Courtesy of HBO) For Jay Worth, VFX Supervisor on Apple TV+’s Lady in the Lake, the challenge was two-fold: create seamless effects and preserve the raw emotional truth of a performance. One of the most significant technical achievements was de-aging Natalie Portman. “It seems so easy on paper, but the reality was far more challenging,” Worth admits. Worth had tackled de-aging before, but never with the same level of success. 
“For me, it is simply because of her performance.” Portman delivered a nuanced, youthful portrayal that felt entirely authentic to the time period. “It made our job both so much easier and set the bar so high for us. Sometimes, you can hide in a scene like this – you pull the camera back, cut away before the most expressive parts of the dialogue, or the illusion breaks,” Worth explains. In Lady in the Lake, there was nowhere to hide. “I think that is what I am most proud of with these shots. It felt like the longer you stayed on them, the more you believed them. That is a real feat with this sort of work.” Skully VFX handled the de-aging. “They nailed the look early on and delivered throughout the project on this difficult task.” Working alongside Production Designer Jc Molina, the VFX team helped shape a world that felt rich, lived-in and historically precise. “We were entrusted with the most important part of this show – do we believe this performance from this character in this part of her journey? – and we feel like we were able to deliver on this challenge.” On the other end of the spectrum, Netflix’s American Primeval, under the guidance of VFX Supervisor Andrew Ceperley, delivers rugged, visceral realism in its portrayal of the untamed American frontier. With brutal battle sequences, sprawling landscapes and historical re-creations that interweave practical and digital effects, the series stands as a testament to how VFX can enhance grounded, historical storytelling. Ceperley says, “The standout is definitely the nearly three-minute single-shot massacre sequence in the forest episode.” Designed to immerse the audience in the raw, chaotic violence of the frontier, the scene captures every brutal detail with unrelenting intensity. The challenge was crafting invisible visual effects, enhancing practical stunts and destruction without breaking the immersive, handheld camera style. 
“The sequence was designed to be one shot made up of 10 individual takes, shot over seven days, seamlessly stitched together, all while using a handheld camera on an extremely wide-angle lens.” One of the most complex moments involved a bull smashing through a wagon while the characters hid underneath. Rather than relying on CGI, the team took a practical approach, placing a 360-degree camera under the wagon while the special effects team rigged it to explode in a way that simulated an impact. “A real bull was then guided to run toward the 360 camera and leap over it,” Ceperley says. The footage was blended with live-action shots of the actors with minimal CGI enhancements – just dust and debris – to complete the effect. Adding to the difficulty, the scene was set at sunset, giving the team an extremely limited window to capture each day’s footage. The massacre sequence was a prime example of integrating visual effects with practical techniques in creative, unconventional ways, blending old-school in-camera effects with modern stitching techniques to create a visceral cinematic moment that stayed true to the show’s raw, historical aesthetic. “Using old techniques in new, even strange ways and seeing it pay off and deliver on the original vision was the most rewarding part.”
  • What professionals really think about “Vibe Coding”

Many don’t like it, but everybody agrees it’s the future. “Vibe Coding” is everywhere. Tools and game engines are implementing AI-assisted coding, vibe coding interest has skyrocketed on Google search, and on social media everybody claims to build apps and games in minutes, while the comment sections get flooded with angry developers calling out the pile of garbage code that will never be shipped.

A screenshot from Andrej Karpathy with the original “definition” of Vibe Coding

BUT, how do professionals feel about it? This is what I will cover in this article. We will look at:

- How people react to the term vibe coding
- How their attitude differs based on who they are and their professional experience
- The reason for their stance towards “vibe coding”
- How they feel about the impact “vibe coding” will have in the next 5 years

It all started with this survey on LinkedIn. I have always been curious about how technology can support creatives, and I believe that the only way to get a deeper understanding is to go beyond buzzwords and ask the hard questions. That’s why, for over a year, I’ve been conducting weekly interviews with both the founders developing these tools and the creatives utilising them. If you want to learn about their journeys, I’ve gathered their insights and experiences on my blog, XR AI Spotlight.

Driven by the same motives and curious about people’s feelings about “vibe coding”, I asked a simple question: How does the term “Vibe Coding” make you feel?

Original LinkedIn poll by Gabriele Romagnoli

In just three days, the poll collected 139 votes, and it was clear that most responders didn’t have a good “vibe” about it. The remaining half was equally split between excitement and no specific feeling. But who are these people? What is their professional background? 
Why did they respond the way they did? Curious, I created a more comprehensive survey and sent it to everyone who voted on the LinkedIn poll. The survey had four questions:

1. Select what describes you best: developer, creative, non-creative professional
2. How many years of experience do you have? 1–5, 6–10, 11–15 or 16+
3. Explain why the term “vibe coding” makes you feel excited/neutral/dismissive
4. Do you think “vibe coding” will become more relevant in the next 5 years? (It’s the future, only in niche use cases, unlikely, no idea)

In a few days, I collected 62 replies and started digging into the findings, and that’s when I finally started understanding who took part in the initial poll.

The audience

When characterising the audience, I refrained from adding too many options because I just wanted to understand:

- whether the people responding were the ones making stuff
- what percentage of the makers were creatives and what percentage developers

I was happy to see that only 8% of respondents were non-creative professionals; the remaining 92% were actual makers who have more “skin in the game”, with an almost 50/50 split between creatives and developers. There was also a good spread in the degree of professional experience of the respondents, but that’s where things started to get surprising.

Respondents are mostly “makers” and show a good variety in professional experience

When creating two groups of people with more or less than 10 years of experience, it is clear that less experienced professionals skew more towards a neutral or negative stance than the more experienced group.

Experienced professionals are more positive and open to vibe coding

This might be because senior professionals see AI as a tool to accelerate their workflows, while juniors perceive it as a competitor or threat. I then took out the non-professional creatives and looked at the attitude of these two groups. 
Not surprisingly, fewer creatives than developers have a negative attitude towards “vibe coding”, but the percentage of creatives and developers who have a positive attitude stays almost constant. This means that creatives have a more indecisive or neutral stance than developers.

Creatives have a more positive attitude to vibe coding than developers

What are people saying about “vibe coding”?

As part of the survey, everybody had the chance to add a few sentences explaining their stance. This was not a compulsory field, but to my surprise, only 3 of the 62 left it empty. Before getting into the sentiment analysis, I noticed something quite interesting while filtering the data: people with a negative attitude had much more to say, and their responses were significantly longer than those of the other groups. They wrote an average of 59 words while the others wrote barely 37, which I think is a good indication of the emotional investment of people who want to articulate and explain their point. Let’s now look at what the different groups of people replied. 
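Incidentally, the word-count comparison above is easy to reproduce. As a rough sketch (the response data below is invented for illustration, not the actual survey answers), here is how one might bucket free-text responses by sentiment and compare average response lengths in Python:

```python
# Hypothetical sketch: grouping survey responses by sentiment and comparing
# average word counts, as in the analysis described above. The data is
# illustrative only, not the real survey dataset.
responses = [
    {"sentiment": "negative",
     "text": "It downplays the skill and intention behind writing a functional, efficient program."},
    {"sentiment": "positive",
     "text": "It puts no pressure on it being perfect or thorough."},
    {"sentiment": "neutral",
     "text": "Nice tool, but not more than autocomplete on steroids."},
]

def avg_word_count(items):
    """Mean number of whitespace-separated words across a list of responses."""
    counts = [len(r["text"].split()) for r in items]
    return sum(counts) / len(counts) if counts else 0.0

# Bucket responses by sentiment label.
by_group = {}
for r in responses:
    by_group.setdefault(r["sentiment"], []).append(r)

# Report the average response length per sentiment group.
for group, items in sorted(by_group.items()):
    print(f"{group}: {avg_word_count(items):.1f} words on average")
```

Run on the real 62 answers, this kind of grouping is exactly what surfaces the 59-vs-37-word gap between the negative responses and the rest.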
Patterns in Positive Responses to “Vibe Coding”

Positive responders often embraced vibe coding as a way to break free from rigid programming structures and instead explore, improvise, and experiment creatively.

“It puts no pressure on it being perfect or thorough.”
“Pursuing the vibe, trying what works and then adapt.”
“Coding can be geeky and laborious… ‘vibing’ is quite nice.”

This perspective repositions code not as rigid infrastructure, but as something that favors creativity and playfulness over precision. Several answers point to vibe coding as a democratizing force, opening up coding to a broader audience who want to build without going through the traditional gatekeeping of engineering culture.

“For every person complaining… there are ten who are dabbling in code and programming, building stuff without permission.”
“Bridges creative with technical perfectly, thus creating potential for independence.”

This group often used words like “freedom,” “reframing,” and “revolution.”

Patterns in Neutral Responses to “Vibe Coding”

As shown in the initial LinkedIn poll, 27% of respondents expressed mixed feelings. 
When going through their responses, they recognised potential and were open to experimentation, but they also had lingering doubts about the name, seriousness, and future usefulness.

“It’s still a hype or buzzword.”
“I have mixed feelings of fascination and scepticism.”
“Unsure about further developments.”

They were on the fence and were often enthusiastic about the capability, but wary of the framing. Neutral responders also acknowledged that complex, polished, or production-level work still requires traditional approaches, and framed vibe coding as an early-stage assistant, not a full solution.

“Nice tool, but not more than autocomplete on steroids.”
“Helps get setup quickly… but critical thinking is still a human job.”
“Great for prototyping, not enough to finalize product.”

Some respondents were indifferent to the term itself, viewing it more as a label or meme than a paradigm shift. For them, it doesn’t change the substance of what’s happening.

“At the end of the day they are just words. Are you able to accomplish what’s needed?”
“I think it’s been around forever, just now with a new name.”

These voices grounded the discussion in the terminology, and I think they bring up a very important point that leads to the polarisation of a lot of the conversations around “vibe coding”.

Patterns in Negative Responses to “Vibe Coding”

Many respondents expressed concern that vibe coding implies a casual, unstructured approach to coding. This was often linked to fears about poor code quality, bugs, and security issues.

“Feels like building a house without knowing how electricity and water systems work.”
“Without fundamental knowledge… you quickly lose control over the output.”

The term was also seen as dismissive or diminishing the value of skilled developers. 
It really rubbed people the wrong way, especially those with professional experience.

“It downplays the skill and intention behind writing a functional, efficient program.”
“Vibe coding implies not understanding what the AI does but still micromanaging it.”

As with the “neutral” respondents, there’s a strong mistrust around how the term is used, where it’s seen as fueling unrealistic expectations or being pushed by non-experts.

“Used to promote coding without knowledge.”
“Just another overhyped term like NFTs or memecoins.”
“It feels like a joke that went too far.”

Ultimately, I decided to compare attitudes that are excited about and accepting of vibe coding vs. those that reject or criticise it. After all, even among people who were neutral, there was a general acceptance that vibe coding has its place. Many saw it as a useful tool for things like prototyping, creative exploration, or simply making it easier to get started. What really stood out, though, was the absence of the fear that was so prominent in the “negative” group, which saw vibe coding as a threat to software quality or professional identity.

People in the neutral and positive groups generally see potential. They view it as useful for prototyping, creative exploration, or making coding more accessible, but they still recognise the need for structure in complex systems. In contrast, the negative group rejects the concept outright – not just the name, but what it stands for: a more casual, less rigorous approach to coding. Their opinion is often rooted in defending software engineering as a disciplined craft… and probably their job.

“As long as you understand the result and the process, AI can write and fix scripts much faster than humans can.”
“It’s a joke. It started as a joke… but to me doesn’t encapsulate actual AI co-engineering.”

On the topic of skill and control, the neutral and positive group sees AI as a helpful assistant, assuming that a human is still guiding the process. 
They mention refining and reviewing as normal parts of the workflow. The negative group sees more danger, fearing that vibe coding gives a false sense of competence. They describe it as producing buggy or shallow results, often in the hands of inexperienced users. “Critical thinking is still a human job… but vibe coding helps with fast results.”“Vibe-Coding takes away the very features of a good developer… logical thinking and orchestration are crucial.”Culturally, the divide is clear. The positive and neutral voices often embrace vibe coding as part of a broader shift, welcoming new types of creators and perspectives. They tend to come from design or interdisciplinary backgrounds and are more comfortable with playful language. On the other hand, the negative group associates the term with hype and cringe, criticising it as disrespectful to those who’ve spent years honing their technical skills.“It’s about playful, relaxed creation — for the love of making something.”Creating a lot of unsafe bloatware with no proper planning.”What’s the future of “Vibe Coding”?The responses to the last question were probably the most surprising to me. I was expecting that the big scepticism towards vibe coding would align with the scepticism on its future, but that was not the case. 90% of people still see “vibe coding” becoming more relevant overall or in niche use cases.Vibe coding is here to stayOut of curiosity, I also went back to see if there was any difference based on professional experience, and that’s where we see the more experienced audience being more conservative. Only 30% of more senior Vs 50% of less experienced professionals see vibe coding playing a role in niche use cases and 13 % Vs only 3% of more experienced users don’t see vibe coding becoming more relevant at all.More experienced professionals are less likely to think Vibe Coding is the futureThere are still many open questions. What is “vibe coding” really? For whom is it? 
What can you do with it?To answer these questions, I decided to start a new survey you can find here. If you would like to further contribute to this research, I encourage you to participate and in case you are interested, I will share the results with you as well.The more I read or learn about this, I feel “Vibe Coding” is like the “Metaverse”:Some people hate it, some people love it.Everybody means something differentIn one form or another, it is here to stay.What professionals really think about “Vibe Coding” was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
    #what #professionals #really #think #about
    What professionals really think about “Vibe Coding”
    Many don’t like it, buteverybody agrees it’s the future.“Vibe Coding” is everywhere. Tools and game engines are implementing AI-assisted coding, vibe coding interest skyrocketed on Google search, on social media, everybody claims to build apps and games in minutes, while the comment section gets flooded with angry developers calling out the pile of garbage code that will never be shipped.A screenshot from Andrej Karpathy with the original “definition” of Vibe CodingBUT, how do professionals feel about it?This is what I will cover in this article. We will look at:How people react to the term vibe coding,How their attitude differs based on who they are and their professional experienceThe reason for their stance towards “vibe coding”How they feel about the impact “vibe coding” will have in the next 5 yearsIt all started with this survey on LinkedIn. I have always been curious about how technology can support creatives and I believe that the only way to get a deeper understanding is to go beyond buzzwords and ask the hard questions. That’s why for over a year, I’ve been conducting weekly interviews with both the founders developing these tools and the creatives utilising them. If you want to learn their journeys, I’ve gathered their insights and experiences on my blog called XR AI Spotlight.Driven by the same motives and curious about people’s feelings about “vibe coding”, I asked a simple question: How does the term “Vibe Coding” make you feel?Original LinkedIn poll by Gabriele RomagnoliIn just three days, the poll collected 139 votes and it was clear that most responders didn’t have a good “vibe” about it. The remaining half was equally split between excitement and no specific feeling.But who are these people? What is their professional background? 
Why did they respond the way they did?Curious, I created a more comprehensive survey and sent it to everyone who voted on the LinkedIn poll.The survey had four questions:Select what describes you best: developers, creative, non-creative professionalHow many years of experience do you have? 1–5, 6–10, 11–15 or 16+Explain why the term “vibe coding” makes you feel excited/neutral/dismissive?Do you think “vibe coding” will become more relevant in the next 5 years?: It’s the future, only in niche use cases, unlikely, no idea)In a few days, I collected 62 replies and started digging into the findings, and that’s when I finally started understanding who took part in the initial poll.The audienceWhen characterising the audience, I refrained from adding too many options because I just wanted to understand:If the people responding were the ones making stuffWhat percentage of makers were creatives and what developersI was happy to see that only 8% of respondents were non-creative professionals and the remaining 92% were actual makers who have more “skin in the game“ with almost a 50/50 split between creatives and developers. There was also a good spread in the degree of professional experience of the respondents, but that’s where things started to get surprising.Respondents are mostly “makers” and show a good variety in professional experienceWhen creating 2 groups with people who have more or less than 10 years of experience, it is clear that less experienced professionals skew more towards a neutral or negative stance than the more experienced group.Experienced professionals are more positive and open to vibe codingThis might be because senior professionals see AI as a tool to accelerate their workflows, while more juniors perceive it as a competitor or threat.I then took out the non-professional creatives and looked at the attitude of these 2 groups. 
Not surprisingly, fewer creatives than developers have a negative attitude towards “vibe coding”, but the percentage of creatives and developers who have a positive attitude stays almost constant. This means that creatives have a more indecisive or neutral stance than developers.Creatives have a more positive attitude to vibe coding than developersWhat are people saying about “vibe coding”?As part of the survey, everybody had the chance to add a few sentences explaining their stance. This was not a compulsory field, but to my surprise, only 3 of the 62 left it empty. Before getting into the sentiment analysis, I noticed something quite interesting while filtering the data. People with a negative attitude had much more to say, and their responses were significantly longer than the other group. They wrote an average of 59 words while the others barely 37 and I think is a good indication of the emotional investment of people who want to articulate and explain their point. Let’s now look at what the different groups of people replied.😍 Patterns in Positive Responses to “Vibe Coding”Positive responders often embraced vibe coding as a way to break free from rigid programming structures and instead explore, improvise, and experiment creatively.“It puts no pressure on it being perfect or thorough.”“Pursuing the vibe, trying what works and then adapt.”“Coding can be geeky and laborious… ‘vibing’ is quite nice.”This perspective repositions code not as rigid infrastructure, but something that favors creativity and playfulness over precision.Several answers point to vibe coding as a democratizing force opening up coding to a broader audience, who want to build without going through the traditional gatekeeping of engineering culture.“For every person complaining… there are ten who are dabbling in code and programming, building stuff without permission.”“Bridges creative with technical perfectly, thus creating potential for independence.”This group often used words like 
“freedom,” “reframing,” and “revolution.”.😑 Patterns in Neutral Responses to “Vibe Coding”As shown in the initial LinkedIn poll, 27% of respondents expressed mixed feelings. When going through their responses, they recognised potential and were open to experimentation but they also had lingering doubts about the name, seriousness, and future usefulness.“It’s still a hype or buzzword.”“I have mixed feelings of fascination and scepticism.”“Unsure about further developments.”They were on the fence and were often enthusiastic about the capability, but wary of the framing.Neutral responders also acknowledged that complex, polished, or production-level work still requires traditional approaches and framed vibe coding as an early-stage assistant, not a full solution.“Nice tool, but not more than autocomplete on steroids.”“Helps get setup quickly… but critical thinking is still a human job.”“Great for prototyping, not enough to finalize product.”Some respondents were indifferent to the term itself, viewing it more as a label or meme than a paradigm shift. For them, it doesn’t change the substance of what’s happening.“At the end of the day they are just words. Are you able to accomplish what’s needed?”“I think it’s been around forever, just now with a new name.”These voices grounded the discussion in the terminology and I think they bring up a very important point that leads to the polarisation of a lot of the conversations around “vibe coding”.🤮 Patterns in Negative Responses to “Vibe Coding”Many respondents expressed concern that vibe coding implies a casual, unstructured approach to coding. This was often linked to fears about poor code quality, bugs, and security issues.“Feels like building a house without knowing how electricity and water systems work.”“Without fundamental knowledge… you quickly lose control over the output.”The term was also seen as dismissive or diminishing the value of skilled developers. 
It really rubbed people the wrong way, especially those with professional experience.“It downplays the skill and intention behind writing a functional, efficient program.”“Vibe coding implies not understanding what the AI does but still micromanaging it.”Like for “neutral” respondents, there’s a strong mistrust around how the term is usedwhere it’s seen as fueling unrealistic expectations or being pushed by non-experts.“Used to promote coding without knowledge.”“Just another overhyped term like NFTs or memecoins.”“It feels like a joke that went too far.”Ultimately, I decided to compare attitudes that are excitedand acceptingof vibe coding vs. those that reject or criticise it. After all, even among people who were neutral, there was a general acceptance that vibe coding has its place. Many saw it as a useful tool for things like prototyping, creative exploration, or simply making it easier to get started. What really stood out, though, was the absence of fear that was very prominent in the “negative” group and saw vibe coding as a threat to software quality or professional identity.People in the neutral and positive groups generally see potential. They view it as useful for prototyping, creative exploration, or making coding more accessible, but they still recognise the need for structure in complex systems. In contrast, the negative group rejects the concept outright, and not just the name, but what it stands for: a more casual, less rigorous approach to coding. Their opinion is often rooted in defending software engineering as a disciplined craft… and probably their job.😍 “As long as you understand the result and the process, AI can write and fix scripts much faster than humans can.”🤮 “It’s a joke. It started as a joke… but to me doesn’t encapsulate actual AI co-engineering.”On the topic of skill and control, the neutral and positive group sees AI as a helpful assistant, assuming that a human is still guiding the process. 
They mention refining and reviewing as normal parts of the workflow. The negative group sees more danger, fearing that vibe coding gives a false sense of competence. They describe it as producing buggy or shallow results, often in the hands of inexperienced users.😑 “Critical thinking is still a human job… but vibe coding helps with fast results.”🤮“Vibe-Coding takes away the very features of a good developer… logical thinking and orchestration are crucial.”Culturally, the divide is clear. The positive and neutral voices often embrace vibe coding as part of a broader shift, welcoming new types of creators and perspectives. They tend to come from design or interdisciplinary backgrounds and are more comfortable with playful language. On the other hand, the negative group associates the term with hype and cringe, criticising it as disrespectful to those who’ve spent years honing their technical skills.😍“It’s about playful, relaxed creation — for the love of making something.”🤮Creating a lot of unsafe bloatware with no proper planning.”What’s the future of “Vibe Coding”?The responses to the last question were probably the most surprising to me. I was expecting that the big scepticism towards vibe coding would align with the scepticism on its future, but that was not the case. 90% of people still see “vibe coding” becoming more relevant overall or in niche use cases.Vibe coding is here to stayOut of curiosity, I also went back to see if there was any difference based on professional experience, and that’s where we see the more experienced audience being more conservative. Only 30% of more senior Vs 50% of less experienced professionals see vibe coding playing a role in niche use cases and 13 % Vs only 3% of more experienced users don’t see vibe coding becoming more relevant at all.More experienced professionals are less likely to think Vibe Coding is the futureThere are still many open questions. What is “vibe coding” really? For whom is it? 
What can you do with it?To answer these questions, I decided to start a new survey you can find here. If you would like to further contribute to this research, I encourage you to participate and in case you are interested, I will share the results with you as well.The more I read or learn about this, I feel “Vibe Coding” is like the “Metaverse”:Some people hate it, some people love it.Everybody means something differentIn one form or another, it is here to stay.What professionals really think about “Vibe Coding” was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story. #what #professionals #really #think #about
    UXDESIGN.CC
    What professionals really think about “Vibe Coding”
Many don’t like it, but (almost) everybody agrees it’s the future.

“Vibe Coding” is everywhere. Tools and game engines are implementing AI-assisted coding, interest in vibe coding has skyrocketed on Google search, and on social media everybody claims to build apps and games in minutes, while the comment sections get flooded with angry developers calling out the pile of garbage code that will never be shipped.

A screenshot from Andrej Karpathy with the original “definition” of Vibe Coding

BUT, how do professionals feel about it? That is what I will cover in this article. We will look at:

- How people react to the term “vibe coding”
- How their attitude differs based on who they are and their professional experience
- The reasons for their stance towards “vibe coding” (with direct quotes)
- How they feel about the impact “vibe coding” will have in the next 5 years

It all started with this survey on LinkedIn. I have always been curious about how technology can support creatives, and I believe that the only way to get a deeper understanding is to go beyond buzzwords and ask the hard questions. That’s why, for over a year, I’ve been conducting weekly interviews with both the founders developing these tools and the creatives utilising them. If you want to learn about their journeys, I’ve gathered their insights and experiences on my blog, XR AI Spotlight.

Driven by the same motives and curious about people’s feelings about “vibe coding”, I asked a simple question: How does the term “Vibe Coding” make you feel?

Original LinkedIn poll by Gabriele Romagnoli

In just three days, the poll collected 139 votes, and it was clear that most respondents didn’t have a good “vibe” about it: roughly half reacted negatively, while the remaining half was equally split between excitement and no specific feeling. But who are these people? What is their professional background?
Why did they respond the way they did? Curious, I created a more comprehensive survey and sent it to everyone who voted on the LinkedIn poll. The survey had four questions:

- Select what describes you best: developer, creative, or non-creative professional
- How many years of experience do you have? (1–5, 6–10, 11–15, or 16+)
- Explain why the term “vibe coding” makes you feel excited/neutral/dismissive.
- Do you think “vibe coding” will become more relevant in the next 5 years? (It’s the future, only in niche use cases, unlikely, no idea)

In a few days, I collected 62 replies and started digging into the findings, and that’s when I finally started understanding who took part in the initial poll.

The audience

When characterising the audience, I refrained from adding too many options because I just wanted to understand:

- whether the people responding were the ones making stuff, and
- what percentage of the makers were creatives and what percentage were developers.

I was happy to see that only 8% of respondents were non-creative professionals; the remaining 92% were actual makers, who have more “skin in the game”, with an almost 50/50 split between creatives and developers. There was also a good spread in the respondents’ degree of professional experience, but that’s where things started to get surprising.

Respondents are mostly “makers” and show a good variety in professional experience

When splitting respondents into two groups, with more or less than 10 years of experience, it is clear that less experienced professionals skew more towards a neutral or negative stance than the more experienced group.

Experienced professionals are more positive and open to vibe coding

This might be because senior professionals see AI as a tool to accelerate their workflows, while juniors perceive it more as a competitor or threat. I then took out the non-professional creatives and looked at the attitudes of these two groups.
Not surprisingly, fewer creatives than developers have a negative attitude towards “vibe coding” (47% for developers vs. 37% for creatives), but the percentage of creatives and developers with a positive attitude stays almost constant. This means that creatives have a more indecisive or neutral stance than developers.

Creatives have a more positive attitude to vibe coding than developers

What are people saying about “vibe coding”?

As part of the survey, everybody had the chance to add a few sentences explaining their stance. This was not a compulsory field, but to my surprise, only 3 of the 62 left it empty (thanks, everybody). Before getting into the sentiment analysis, I noticed something quite interesting while filtering the data: people with a negative attitude had much more to say, and their responses were significantly longer. They wrote an average of 59 words, while the others wrote barely 37, which I think is a good indication of the emotional investment of people who want to articulate and explain their point.
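The length comparison is easy to reproduce. As a minimal sketch of that step, here is how the per-group average word count could be computed; the rows below are made-up stand-ins, since the actual survey responses are not published:

```python
from statistics import mean

# Hypothetical stand-ins for the 62 free-text survey replies; each row
# pairs a sentiment label with the respondent's explanation.
responses = [
    ("negative", "It downplays the skill and intention behind writing a functional, efficient program."),
    ("negative", "Without fundamental knowledge you quickly lose control over the output."),
    ("neutral", "Nice tool, but not more than autocomplete on steroids."),
    ("positive", "It puts no pressure on it being perfect or thorough."),
]

def mean_word_count(rows, label):
    """Average whitespace-separated word count for one sentiment group."""
    counts = [len(text.split()) for sentiment, text in rows if sentiment == label]
    return mean(counts)

for label in ("negative", "neutral", "positive"):
    print(label, round(mean_word_count(responses, label), 1))
```

On the real data, the same comparison is what produced the 59-word average for the negative group versus 37 for everyone else.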
Let’s now look at what the different groups of people replied.

😍 Patterns in Positive Responses to “Vibe Coding”

Positive respondents often embraced vibe coding as a way to break free from rigid programming structures and instead explore, improvise, and experiment creatively.

“It puts no pressure on it being perfect or thorough.”
“Pursuing the vibe, trying what works and then adapt.”
“Coding can be geeky and laborious… ‘vibing’ is quite nice.”

This perspective repositions code not as rigid infrastructure, but as something that favors creativity and playfulness over precision. Several answers point to vibe coding as a democratizing force, opening up coding to a broader audience who want to build without going through the traditional gatekeeping of engineering culture.

“For every person complaining… there are ten who are dabbling in code and programming, building stuff without permission.”
“Bridges creative with technical perfectly, thus creating potential for independence.”

This group often used words like
When going through their responses, they recognised potential and were open to experimentation but they also had lingering doubts about the name, seriousness, and future usefulness.“It’s still a hype or buzzword.”“I have mixed feelings of fascination and scepticism.”“Unsure about further developments.”They were on the fence and were often enthusiastic about the capability, but wary of the framing.Neutral responders also acknowledged that complex, polished, or production-level work still requires traditional approaches and framed vibe coding as an early-stage assistant, not a full solution.“Nice tool, but not more than autocomplete on steroids.”“Helps get setup quickly… but critical thinking is still a human job.”“Great for prototyping, not enough to finalize product.”Some respondents were indifferent to the term itself, viewing it more as a label or meme than a paradigm shift. For them, it doesn’t change the substance of what’s happening.“At the end of the day they are just words. Are you able to accomplish what’s needed?”“I think it’s been around forever, just now with a new name.”These voices grounded the discussion in the terminology and I think they bring up a very important point that leads to the polarisation of a lot of the conversations around “vibe coding”.🤮 Patterns in Negative Responses to “Vibe Coding”Many respondents expressed concern that vibe coding implies a casual, unstructured approach to coding. This was often linked to fears about poor code quality, bugs, and security issues.“Feels like building a house without knowing how electricity and water systems work.”“Without fundamental knowledge… you quickly lose control over the output.”The term was also seen as dismissive or diminishing the value of skilled developers. 
It really rubbed people the wrong way, especially those with professional experience.

“It downplays the skill and intention behind writing a functional, efficient program.”
“Vibe coding implies not understanding what the AI does but still micromanaging it.”

As with the “neutral” respondents, there is strong mistrust around how the term is used (especially on social media), where it’s seen as fueling unrealistic expectations or being pushed by non-experts.

“Used to promote coding without knowledge.”
“Just another overhyped term like NFTs or memecoins.”
“It feels like a joke that went too far.”

Ultimately, I decided to compare the attitudes that are excited about (positive) or accepting of (neutral) vibe coding with those that reject or criticise it. After all, even among people who were neutral, there was a general acceptance that vibe coding has its place: many saw it as a useful tool for things like prototyping, creative exploration, or simply making it easier to get started. What really stood out, though, was the absence of the fear that was so prominent in the “negative” group, which saw vibe coding as a threat to software quality or professional identity.

People in the neutral and positive groups generally see potential. They view it as useful for prototyping, creative exploration, or making coding more accessible, but they still recognise the need for structure in complex systems. In contrast, the negative group rejects not just the name but what the concept stands for: a more casual, less rigorous approach to coding. Their opinion is often rooted in defending software engineering as a disciplined craft… and probably their job.

😍 “As long as you understand the result and the process, AI can write and fix scripts much faster than humans can.”
🤮 “It’s a joke.
It started as a joke… but to me doesn’t encapsulate actual AI co-engineering.”

On the topic of skill and control, the neutral and positive groups see AI as a helpful assistant, assuming that a human is still guiding the process. They mention refining and reviewing as normal parts of the workflow. The negative group sees more danger, fearing that vibe coding gives a false sense of competence. They describe it as producing buggy or shallow results, often in the hands of inexperienced users.

😑 “Critical thinking is still a human job… but vibe coding helps with fast results.”
🤮 “Vibe-Coding takes away the very features of a good developer… logical thinking and orchestration are crucial.”

Culturally, the divide is clear. The positive and neutral voices often embrace vibe coding as part of a broader shift, welcoming new types of creators and perspectives. They tend to come from design or interdisciplinary backgrounds and are more comfortable with playful language. On the other hand, the negative group associates the term with hype and cringe, criticising it as disrespectful to those who’ve spent years honing their technical skills.

😍 “It’s about playful, relaxed creation — for the love of making something.”
🤮 “Creating a lot of unsafe bloatware with no proper planning.”

What’s the future of “Vibe Coding”?

The responses to the last question were probably the most surprising to me. I was expecting the strong scepticism towards vibe coding to align with scepticism about its future, but that was not the case: 90% of people still see “vibe coding” becoming more relevant, either overall or in niche use cases.

Vibe coding is here to stay

Out of curiosity, I also went back to see if there was any difference based on professional experience, and that’s where we see the more experienced audience being more conservative.
Only 30% of more senior professionals (vs. 50% of less experienced ones) see vibe coding playing a role in niche use cases, and 13% of more experienced users (vs. only 3% of less experienced ones) don’t see vibe coding becoming more relevant at all.

More experienced professionals are less likely to think Vibe Coding is the future

There are still many open questions. What is “vibe coding” really? Who is it for? What can you do with it? To answer these questions, I decided to start a new survey, which you can find here. If you would like to contribute further to this research, I encourage you to participate, and if you are interested, I will share the results with you as well.

The more I read and learn about this, the more I feel “Vibe Coding” is like the “Metaverse”:

- Some people hate it, some people love it.
- Everybody means something different by it.
- In one form or another, it is here to stay.

What professionals really think about “Vibe Coding” was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • A Place to Call Home: Le Christin and Les Studios du PAS, Montreal, Quebec

    View of the south façade before construction of a new residential project that now conceals Le Christin from Boulevard René Lévesque.
    PROJECT Le Christin, Montreal, Quebec
    ARCHITECT Atelier Big City
    PHOTOS James Brittain
     
    PROJECT Les Studios du PAS, Montreal, Quebec
    ARCHITECT L. McComber in collaboration with Inform 
    PHOTOS Ulysse Lemerise
     
    Nighttime, April 15, 2025. A thousand volunteers are gathering in Montreal, part of a province-wide effort to try and put numbers on a growing phenomenon in cities like Vancouver, Calgary, Toronto, and many others. The volunteers are getting ready to walk around targeted areas in downtown Montreal and around certain subway stations. Temporary shelters are also visited.
    First conducted in the spring of 2018, this survey showed that 3,149 people were in a vulnerable situation at the time. Four years later, a similar effort revealed that Montreal’s homeless population had risen to 4,690 people—and that there were some 10,000 people experiencing homelessness in the whole of the province. The 2025 numbers are expected to be significantly higher. For the organizers, this one-night snapshot of the situation is “neither perfect nor complete.” However, for nonprofit organizations and governmental bodies eager to prevent a vulnerable population from ending up on the streets, the informal census does provide highly valuable information. 
    Two recent initiatives—very different from one another—offer inspiring answers. The most recent one, Le Christin, was designed by Atelier Big City and inaugurated in 2024. Studios du PAS, on the other hand, was designed by Montreal firm L. McComber, and welcomed its first tenants in 2022. Both projects involved long-standing charities: the 148-year-old Accueil Bonneau, in the case of Le Christin, and the 136-year-old Mission Old Brewery for Studios du PAS. Le Christin was spearheaded, and mostly financed, by the Société d’habitation et de développement de Montréal (SHDM), a non-profit, para-municipal corporation created in 1988. Studios du PAS was first selected by the City of Montreal to be built thanks to the Rapid Housing Initiative program run by the Canada Mortgage and Housing Corporation (CMHC). Le Christin also received a financial contribution from the CMHC towards the end of the process.
    Boldly coloured blind walls signal the presence of Le Christin in the center of a densely occupied city block, with the entrance to the left along Sanguinet Street.
    Le Christin
    Although sited in a very central location, near the buzzing St. Catherine and St. Denis streets, Le Christin is hard to find. And even when one suddenly spots two seven-storey-high walls, coloured lemon-zest yellow and mango orange, it’s difficult to figure out what they are about. A stroll along the tiny Christin Street finally reveals the front façade of this new facility, now home to some of Montreal’s most vulnerable citizens. 
    View of Le Christin’s modulated front façade. Galvanized steel panels at ground level add a soft touch while protecting the building from potential damage caused by snow plows.
    Le Christin is unique for a number of reasons. First among them is its highly unusual location—at the centre of a dense city block otherwise occupied by university buildings, office towers, and condo blocks. Until a few years ago, the site was home to the four-storey Appartements Le Riga. The Art Deco-style building had been built in 1914 by developer-architect Joseph-Arthur Godin, who was a pioneer in his own right: he was one of the first in Montreal to experiment with reinforced concrete structures, a novelty in the city at the time. A century later, Le Riga, by then the property of SHDM, was in serious need of repair. Plans had already been drafted for a complete renovation of the building when a thorough investigation revealed major structural problems. Tenants had to leave on short notice and were temporarily relocated; the building was eventually demolished in 2019. By that time, Atelier Big City had been mandated to design a contemporary building that would replace Le Riga and provide a “place of one’s own” to close to 150 tenants, formerly homeless or at risk of becoming so.   
    Le Christin – Site Plan and Ground Floor Plan
    The entire operation sparked controversy, particularly as Le Christin started to rise, showing no sign of nostalgia. The architects’ daring approach was difficult to fathom—particularly for those who believe social housing should keep a low profile. 
    The program, originally meant for a clientele of single men, gradually evolved to include women. In order to reflect societal trends, the architects were asked to design 24 slightly larger units located in the building’s east wing, separated from the rest of the units by secured doors. Thus, Le Christin is able to accommodate homeless couples or close friends, as well as students and immigrants in need.

    A tenants-only courtyard is inserted in the south façade.
    In order to provide the maximum number of units requested by SHDM, each of the 90 studios was reduced to 230 square feet—an adjustment from Atelier Big City’s initial, slightly more generous plans. In a clever move, an L-shaped kitchen hugs the corner of each unit, pushing out against the exterior wall. As a result, the window openings recede from the façade, creating a sense of intimacy for the tenants, who enjoy contact with the exterior through large windows protected by quiet Juliet balconies. Far from damaging the initial design, the added constraint of tightened units allowed the architects to modulate the building’s façades, creating an even stronger statement.
    On the unit levels, corridors include large openings along the south façade. Each floor is colour-coded to enliven the space; overhead, perforated metal plates conceal the mechanical systems. An extra floor was gained thanks to the decision to expose the various plumbing, electrical, and ventilation systems.
    Well-lit meeting rooms and common areas are found near Le Christin’s front entrance, along with offices for personnel, who are present on the premises 24 hours a day. Apart from a small terrace above the entrance, the main exterior space is a yard which literally cuts into the building’s back façade. This has a huge impact on the interiors at all levels: corridors are generously lit with sunlight, a concept market developers would be well advised to imitate. The adjacent exit stairs are also notable, with their careful detailing and the presence of glazed openings. 
    The fire stairs, which open onto the exterior yard at ground level, feature glazing that allows for ample natural light.
    Le Christin has achieved the lofty goal articulated by SHDM’s former director, architect Nancy Schoiry: “With this project, we wanted to innovate and demonstrate that it was possible to provide quality housing for those at risk of homelessness.”
    The low-slung Studios du PAS aligns with the neighbourhood’s two-storey buildings.
    Studios du PAS
    In sharp contrast with Le Christin’s surroundings, the impression one gets approaching Studios du PAS, 14 kilometres east of downtown Montreal, is that of a small town. In this mostly low-scale neighbourhood, L. McComber architects adopted a respectful, subdued approach—blending in, rather than standing out. 
    The project uses a pared-down palette of terracotta tile, wood, and galvanized steel. The footbridge links the upper level to shared exterior spaces.
    The financing for this small building, planned for individuals aged 55 or older experiencing or at risk of homelessness, was tied to a highly demanding schedule. The project had to be designed, built, and occupied within 18 months: an “almost impossible” challenge, according to principal architect Laurent McComber. From the very start, prefabrication was favoured over more traditional construction methods. And even though substantial work had to be done on-site—including the installation of the roof, electrical and mechanical systems, as well as exterior and interior finishes—the partially prefabricated components did contribute to keeping costs under control and meeting the 18-month design-to-delivery deadline.
    Les Studios du PAS
    The building was divided into 20 identical modules, each fourteen feet wide—the maximum width allowable on the road. Half the modules were installed at ground level. One of these, positioned nearest the street entrance, serves as a community room directly connected to a small office for the use of a social worker, allowing staff to follow up regularly with tenants. Flooded with natural light, the double-height lobby provides a friendly and inclusive welcome.
    The ground level studios were designed so they could be adapted to accommodate accessibility needs.
    Some of the ground floor units were adapted to meet the needs of those with a physical disability; the other units were designed to be easily adaptable if needed. All studio apartments, slightly under 300 square feet, include a full bathroom, a minimal kitchen, and sizeable storage space hidden behind cabinet doors. Most of the apartments include a small exterior alcove, which provides an intimate outdoor space while creating a subtle rhythm along the front façade.
    Inside the studio units, storage cupboards for clothes and belongings were added as an extension of the kitchen wall.
    Conscious of the tradition of brick residential buildings in Montreal, yet wanting to explore alternate materials, the architects selected an earth-toned terracotta tile from Germany. The 299 mm × 1500 mm tiles are clipped to the façade, allowing for faster installation and easier maintenance. All units enjoy triple-glazed windows and particularly well-insulated walls. A high-performance heat pump was installed to lower energy demand—and costs—for heating and cooling.
    Wood siding was used to soften the upper-level balconies, which provide protected outdoor spaces for residents.
     
    Pride and Dignity
    Le Christin and Les Studios du PAS have little in common—except, of course, their program. Architecturally speaking, each represents an interesting solution to the problem at hand. While Le Christin is a high-spirited, flamboyant statement, Studios du PAS is to be praised for its respectful attitude, and for the architects’ relentless search for interesting alternatives to traditional construction norms.
    Atelier Big City is one of the few firms in Canada that has the guts—and the talent—to play with bold colours. Decades of experimentation (not just with public buildings, but also within their own homes) led up to Le Christin, which is perhaps their strongest building to date. Their judicious choices of colour, brick type, and materials transmit a message of pride and dignity.
    Both projects demonstrate enormous respect and generosity to their residents: they provide architecture that treats them not as an underclass, but as regular people, who need the stability of dignified housing to start rebuilding their lives.
    Odile Hénault is a contributing editor to Canadian Architect.
     
    Le Christin
    CLIENT Société d’habitation et de développement de Montréal (SHDM) | ARCHITECT TEAM Anne Cormier, Randy Cohen, Howard Davies, Fannie Yockell, Gabriel Tessier, Sébastien St-Laurent, Lisa Vo | STRUCTURAL DPHV | MECHANICAL/ELECTRICAL BPA | CIVIL Genexco | LIGHTING CS Design | AREA 4,115 m² | CONSTRUCTION BUDGET $18.9 M | COMPLETION November 2023
     
    Les Studios du PAS 
    CLIENT PAS de la rue | ARCHITECT TEAM L. McComber—Laurent McComber, Olivier Lord, Jérôme Lemieux, Josianne Ouellet-Daudelin. Inform—David Grenier, Élisabeth Provost, Amélie Tremblay | PROJECT MANAGEMENT Groupe CDH | STRUCTURAL Douglas Consultants | MECHANICAL/ELECTRICAL Martin Roy & associés | CIVIL Gravitaire | CONTRACTOR Gestion Étoc | AREA 1,035 m² | BUDGET $3.4 M | COMPLETION September 2022

    As appeared in the June 2025 issue of Canadian Architect magazine

    The post A Place to Call Home: Le Christin and Les Studios du PAS, Montreal, Quebec appeared first on Canadian Architect.