• Ah, DreamWorks! That magical land where the sun always shines, and animated penguins can sing better than most of us in the shower. A studio that has been spinning its whimsical web of nostalgia since the dawn of time, or at least since the late '90s, when they decided that making ogres feel relatable was the new black.

    So, what's this I hear? A documentary detailing the illustrious history of DreamWorks? Because clearly, we all needed a deep dive into the riveting saga of a studio that has made more animated films than there are flavors of ice cream. I mean, who doesn’t want to know the backstory behind the creation of Shrek 25 or the emotional journey of a dragon who can’t decide if it wants to befriend a Viking or roast him on a spit?

    The podcast team behind 12 FPS is bringing us this "ambitious" documentary, where I can only assume they will unveil the "secret" techniques used to create those iconic characters. Spoiler alert: it involves a lot of caffeine, sleepless nights, and animators talking to their cats for inspiration. Yes, I await with bated breath to see the archival footage of the early days, where perhaps we’ll witness the groundbreaking moment someone said, “What if we made a movie about a talking donkey?” Truly, groundbreaking stuff.

    And let's not overlook the "success" part of their journey. Did we really need a documentary to explain that? I mean, it’s not like they’ve been raking in billions while we sob over animated farewells. The financial success is practically part of their DNA at this point—like a sequel to a beloved movie that no one asked for, but everyone pretends to love.

    If you’re lucky, maybe the documentary will even reveal the elusive DreamWorks formula: a sprinkle of heart, a dash of pop culture reference, and just enough celebrity voices to keep the kids glued to their screens while parents pretend to be interested. Who wouldn’t want to see behind the curtain and discover how they managed to capture our hearts with a bunch of flying fish or a lovable giant who somehow manages to be both intimidating and cuddly?

    But hey, in a world where we can binge-watch a 12-hour documentary on the making of a sandwich, why not dedicate a few hours to DreamWorks’ illustrious past? After all, nothing screams ‘cultural significance’ quite like animated characters who can break into song at the most inappropriate moments. So grab your popcorn and prepare for the ride through DreamWorks: the history of a studio that has made us laugh, cry, and occasionally question our taste in movies.

    #DreamWorks #AnimationHistory #12FPS #Documentary #ShrekForever
    DreamWorks: discover this documentary on the history of the animation studio
    The team behind the 12 FPS podcast has unveiled its new project: an ambitious documentary about the DreamWorks animation studio. From its origins to its most recent projects, from its first attempts to worldwide success, you'll discover the behind-the-scenes story here…
  • Will Eleven Die at the End of ‘Stranger Things’?

    Stranger Things fans are worried about the ultimate fate of main character Eleven, played by Millie Bobby Brown, and some even think the teen might not make it out alive at the end of the series.

    Eleven has been an integral part of the Duffer Brothers’ smash hit Netflix series since it first hit streaming in the summer of 2016. Viewers immediately gravitated toward the show for its spooky atmosphere and mystery-centered plot, nostalgic ’80s vibes and lovable cast of Goonies-esque teen characters. Fans have loved Eleven ever since she made her first appearance in Season 1, Episode 1, “The Vanishing of Will Byers,” and they've watched the unsure, traumatized and quiet young girl transform into a confident, spunky teen with powerful telekinetic abilities over the course of four seasons.

    Now, though, with the series’ fifth and final season set to air later this year, longtime fans are worried about what the end of the show might spell for Eleven, now also known as Jane Hopper.

    Does Eleven Die in Stranger Things?

    Nothing about the fate of the core Stranger Things characters is known for sure at this time. However, that hasn’t stopped viewers from theorizing and speculating.

    During an appearance on U.K. talk show The Jonathan Ross Show in March 2024, Millie Bobby Brown may have inadvertently hinted that her character dies at the end of the show thanks to some questionable phrasing. While discussing the final season, the actress began, “I know how she ...” before catching herself and correcting, “I know what happens to my character.” The initial wording of “I know how she” caught fans’ attention, and many thought the actress almost blurted out, “I know how she dies.”

    Brown also worried fans during a 2024 interview with Capital Radio, when she admitted she discovered her character’s fate after “kind of [forcing] myself into the writers’ room.” “I saw my ending and thought, ‘Oh,’ and then I walked away very slowly,” she cryptically teased.

    For years fans have speculated about the ending of Stranger Things, particularly about which of the core group might not make it out alive. Some fan theories suggest that Eleven is ultimately doomed: she might be forced to lock herself in the Upside Down forever to close the gate between the Upside Down and the real world, or she will die heroically closing the gate and saving her friends and loved ones. Others believe Will Byers, who was the first to venture into the Upside Down and appears to still be connected to it as well as to the series’ villain Vecna, will ultimately die in the finale. Of course, these are just fan theories. Hopefully, all the kids end up just fine and there's a big, happy ending!

    Stranger Things Season 5 will pick up after the epic events of Season 4, in which the kids learned about the evil Vecna, who ended the season by opening a hellish portal between the town of Hawkins and the Upside Down. The fifth season will be released in three parts: the first four episodes will hit Netflix on Nov. 26, three more episodes will begin streaming on Dec. 25 and the series finale will air on Dec. 31.
  • How to watch the Wholesome Direct showcase on June 7 at 12PM ET

    Wholesome Direct, an annual showcase of cute and cozy games, is coming back on Saturday, June 7 at 12PM ET. This is a live event that can be streamed via the official YouTube page or Twitch account. The organizers promise to show off "a vibrant lineup of artistic, uplifting, and emotionally resonant games from developers of all sizes from around the world."

    The YouTube stream link is already available, so feel free to bookmark this page and come back on June 7 just in time for the show. Last year's stream was a whole lot of fun. One of the cool things about Wholesome Direct is that the organizers typically make several games available for download immediately after the event, though we don't know which ones will get that sort of VIP treatment this year.
    We only know a few of the games that will be covered during the event. There's an adorable puzzle game called Is This Seat Taken? that tasks players with positioning cute little characters on a bus, in a waiting room or at a restaurant. This one's actually being released by the event's publishing arm, Wholesome Games Presents. Another title is called MakeRoom and reminds me of the indie hit Unpacking, but with a focus on designing the perfect room and sharing that creation with friends.

    The mobile game Usagi Shima is coming to Steam and is getting a prime spot at Wholesome Direct. This title has players transforming a barren island to make it hospitable to lovable bunnies. Minami Lane is already out for Switch, but is also coming to Steam and will be featured during the livestream. It's a town management sim that focuses on one street at a time. It's also extremely easy on the eyes.

    Last year's stream discussed over 30 titles. That leaves plenty of room for cozy surprises. Also, the showcase falls right in the middle of Summer Game Fest, which hosts a group of loosely affiliated events that begin on June 6. This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/how-to-watch-the-wholesome-direct-showcase-on-june-7-at-12pm-et-181249575.html?src=rss
  • ‘A Minecraft Movie’: Wētā FX Helps Adapt an Iconic Game One Block at a Time

    Adapting the iconic, block-based design aesthetic of Mojang’s beloved Minecraft video game into the hit feature film comedy adventure, A Minecraft Movie, posed an enormous number of hurdles for director Jared Hess and Oscar-winning Production VFX Supervisor Dan Lemmon. Tasked with helping translate the iconic pixelated world into something cinematically engaging, while remaining true to its visual DNA, was Wētā FX, which delivered 450 VFX shots on the film. Two of the studio’s key leads were VFX Supervisor Sheldon Stopsack and Animation Supervisor Kevin Estey.
    But the shot count merely scratches the surface of the extensive work the studio performed. Wētā led the design and creation of The Overworld, 64 unique terrains spanning deserts, lush forests, oceans, and mountain ranges, all combined into one continuous environment; those assets were also shared with Digital Domain for their work on the third-act battle. Wētā also handled extensive work on the lava-filled hellscape of The Nether, using Unreal Engine for early representations in previs, scene scouting, and on set during principal photography, before refining the environment during post-production. They also dressed The Nether with lava, fire, and torches, along with atmospherics and particulates like smoke, ash, and embers.

    But wait… there’s more!
    The studio’s Art Department, working closely with Hess, co-created the look and feel of all digital characters in the film. For Malgosha’s henchmen, the Piglins, Wētā designed and created 12 different variants, all with individual characteristics and personalities. They also designed sheep, bees, pandas, zombies, skeletons, and lovable wolf Dennis. Many of these characters were provided to other vendors for their work on the film.
    Needless to say, the studio truly became a “Master Builder” on the show.

    The film is based on the hugely popular game Minecraft, first released by Sweden’s Mojang Studios in 2011 and purchased by Microsoft for $2.5 billion in 2014, which immerses players in a low-res, pixelated “sandbox” simulation where they can use blocks to build entire worlds. 

    In a far-ranging interview, Stopsack and Estey shared with AWN a peek into their creative process, from early design exploration to creation of an intricate practical cloak for Malgosha and the use of Unreal Engine for previs, postvis, and real-time onset visualization.
    Dan Sarto: The film is filled with distinct settings and characters sporting various “block” styled features. Can you share some of the work you did on the environments, character design, and character animation?
    Sheldon Stopsack: There's so much to talk about and, truth be told, if you were to touch on everything, we would probably need to spend the whole day together. 
    Kevin Estey: Sheldon and I realized that when we talk about the film, either amongst ourselves or with someone else, we could just keep going, there are so many stories to tell.
    DS: Well, start with The Overworld and The Nether. How did the design process begin? What did you have to work with?
    SS: Visual effects is a tricky business, you know. It's always difficult. Always challenging. However, Minecraft stood out to us as not your usual, quote unquote, standard visual effects project, even though, as you know, there is no standard visual effects project because they're all somehow different. They all come with their own creative ideas, inspirations, and challenges. But Minecraft, right from the get-go, was different, simply by the fact that when you first consider the idea of making such a live-action movie, you instantly ask yourself, “How do we make this work? How do we combine these two inherently very, very different but unique worlds?” That was everyone’s number one question. How do we land this? Where do we land this? And I don't think that any of us really had an answer, including our clients, Dan Lemmon and Jared Hess. Everyone was really open to this journey. That's compelling for us, to get out of our comfort zone. It makes you nervous because there are no real obvious answers.
    KE: Early on, we seemed to thrive off these kinds of scary creative challenges. There were lots of question marks. We had many moments when we were trying to figure out character designs. We had a template from the game, but it was an incredibly vague, low-resolution template. And there were so many ways that we could go. But that design discovery throughout the project was really satisfying. 

    DS: Game adaptations are never simple. There usually isn’t much in the way of story. But with Minecraft, from a visual standpoint, how did you translate low res, block-styled characters into something entertaining that could sustain a 100-minute feature film?
    SS: Everything was a question mark. Using the lava that you see in The Nether as one example, we had beautiful concept art for all our environments, The Overworld and The Nether, but those concepts only really took you this far. They didn’t represent the block shapes or give you a clear answer of like how realistic some of those materials, shapes and structures would be. How organic would we go? All of this needed to be explored. For the lava, we had stylized concept pieces, with block shaped viscosity as it flowed down. But we spent months with our effects team, and Dan and Jared, just riffing on ideas. We came full circle, with the lava ending up being more realistic, a naturally viscous liquid based on real physics. And the same goes with the waterfall that you see in the Overworld. 
    The question is, how far do we take things into the true Minecraft representation of things? How much do we scale back a little bit and ground ourselves in reality, with effects we’re quite comfortable producing as a company? There's always a tradeoff to find that balance of how best to combine what’s been filmed, the practical sets and live-action performances, with effects. Where’s the sweet spot? What's the level of abstraction? What's honest to the game? As much as some call Minecraft a simple game, it isn't simple, right? It's incredibly complex. It's got a set of rules and logic to the world building process within the game that we had to learn, adapt, and honor in many ways.
    When our misfits first arrive and we have these big vistas and establishing shots, when you really look at it, you recognize a lot of the things that we tried to adapt from the game. There are different biomes, like the Badlands, which is very sandstone-y; there's the Woodlands, which is a lush environment with cherry blossom trees; you’ve got the snow biome with big mountains in the background. Our intent was to honor the game.
    KE: I took a big cue from a lot of the early designs, and particularly the approach that Jared liked for the characters and to the design in general, which was maintaining the stylized, blocky aesthetic, but covering them in realistic flesh, fur, things that were going to make them appear as real as possible despite the absolutely unreal designs of their bodies. And so essentially, it was squared skeleton… squarish bones with flesh and realistic fur laid over top. We tried various things, all extremely stylized. The Creepers are a good example. We tried all kinds of ways for them to explode. Sheldon found a great reference for a cat coughing up a hairball. He was nice to censor the worst part of it, but those undulations in the chest and ribcage… Jared spoke of the Creepers being basically tragic characters that only wanted to be loved, to just be close to you. But sadly, whenever they did, they’d explode. So, we experimented with a lot of different motions of how they’d explode.

    DS: Talk about the process of determining how these characters would move. None seem to have remotely realistic proportions in their limbs, bodies, or head size.
    KE: There were a couple things that Jared always seemed to be chasing. One was just something that would make him laugh. Of course, it had to sit within the bounds of how a zombie might move, or a skeleton might move, as we were interpreting the game. But the main thing was just, was it fun and funny? I still remember one of the earliest gags they came up with in mocap sessions, even before I joined the show, was how the zombies get up after they fall over. It was sort of like a tripod, where its face and feet were planted and its butt shot up in the air.
    After a lot of experimentation, we came up with basic personality types for each character. There were 12 different types of Piglins. The zombies were essentially like you're coming home from the pub after a few too many pints and you're just trying to get in the door, but you can't find your keys. Loose, slightly inebriated movement. The best movement we found for the skeletons was essentially like an old man with rigid limbs and lack of ligaments that was chasing kids off his lawn. And so, we created this kind of bible of performance types that really helped guide performers on the mocap stage and animators later on.
    SS: A lot of our exploration didn’t stick. But Jared was the expert in all of this. He always came up with some quirky last-minute idea. 
    KE: My favorite from Jared came in the middle of one mocap shoot. He walked up to me and said he had this stupid idea. I said OK, go on. He said, what if Malgosha had these two little pigs next to her, like Catholic altar boys, swinging incense. Can we do that? I talked to our stage manager, and we quickly put together a temporary prop for the incense burners. And we got two performers who just stood there. What are they going to do? Jared said, “Nothing. Just stand there and swing. I think it would look funny.” So, that’s what we did. We dubbed them the Priesty Boys. And they are there throughout the film. That’s what was amazing about Jared. He was always like, let's just try it, see if it works. Otherwise ditch it.

    DS: Tell me about your work on Malgosha. And I also want to discuss your use of Unreal Engine and the previs and postvis work. 
    SS: For Malgosha as a character, our art department did a phenomenal job finding the character design at the concept phase. But it was a collective effort. So many contributors were involved in her making. And I'm not just talking about the digital artists here on our side. It was a joint venture of different people having different explorations and experiments. It started off with the concept work as a foundation, which we mocked up with 3D sketches before building a model. But with Malgosha, we also had the costume department on the production side building this elaborate cloak. Remember, that cloak makes up 80, 85% of her appearance. It's almost like a character in itself, the way we utilized it. And the costume department built this beautiful, elaborate, incredibly intricate, practical version of it that we intended to use on set for the performer to wear. It ended up being impractical because it was too heavy. But it was beautiful. So, while we didn't really use it on set, it gave us something physical to incorporate into our digital version.
    KE: Alan Henry is the motion performer who portrayed her on set and on the mocap stage. I've known him for close to 15 years. I started working with him on The Hobbit films. He was a stunt performer who eventually rolled into doing motion capture with us on The Hobbit. He’s an incredible actor and absolutely hilarious and can adapt to any sort of situation. He’s so improvisational. He came up with an approach to Malgosha very quickly. Added a limp so that she felt decrepit, leaning on the staff, adding her other arm as kind of like a gimp arm that she would point and gesture with.  
    Even though she’s a blocky character, her anatomy is very much a biped, with rounder limbs than the other Piglins. She's got hooves, is somewhat squarish, and her much more bulky mass in the middle was easier to manipulate and move around. Because she would have to battle with Steve in the end, she had to have a level of agility that even some of the Piglins didn't have.

    DS: Did Unreal Engine come into play with her? 
    SS: Unreal was used all the way through the project. Dan Lemmon and his team early on set up their own virtual art department to build representations of the Overworld and the Nether within the context of Unreal. We and Sony Imageworks tried to provide recreations of these environments that were then used within Unreal to previsualize what was happening on set during shooting of principal photography. And that's where our mocap and on-set teams were coming into play. Effects provided what we called the Nudge Cam. It was a system to do real-time tracking using a stereo pair of Basler computer vision cameras that were mounted onto the sides of the principal camera. We provided the live tracking that was then composited in real time with the Unreal Engine content that all the vendors had provided. It was a great way of utilizing Unreal to give the camera operators or DOP, even Jared, a good sense of what we would actually shoot. It gave everyone a little bit of context for the look and feel of what you could actually expect from these scenes. 
    Because we started this journey with Unreal having onset in mind, we internally decided, look, let's take this further. Let's take this into post-production as well. What would it take to utilize Unreal for shot creation? And it was really exclusively used on the Nether environment. I don’t want to say we used it for matte painting replacement. We used it more for say, let's build this extended environment in Unreal. Not only use it as a render engine with this reasonably fast turnaround but also use it for what it's good at: authoring things, quickly changing things, moving columns around, manipulating things, dressing them, lighting them, and rendering them. It became sort of a tool that we used in place of a traditional matte painting for the extended environments.
    KE: Another thing worth mentioning is we were able to utilize it on our mocap stage as well during the two-week shoot with Jared and crew. When we shoot on the mocap stage, we get a very simple sort of gray shaded diagnostic grid. You have your single-color characters that sometimes are textured, but they’re fairly simple without any context of environment. Our special projects team was able to port what we usually see in Giant, the software we use on the mocap stage, into Unreal, which gave us these beautifully lit environments with interactive fire and atmosphere. And Jared and the team could see their movie for the first time in a rough, but still very beautiful rough state. That was invaluable.

    DS: If you had to key on anything, what would you say were the biggest challenges for your teams on the film? You're laughing. I can hear you thinking, “Do we have an hour?” 
    KE: Where do you begin? 
    SS: Exactly. It's so hard to really single one out. And I struggle with that every time I'm asked that question.
    KE: I’ll start. I've got a very simple practical answer and then a larger one, something that was new to us, kind of similar to what we were just talking about. The simple practical one is the Piglins’ square feet with no ankles. It was very tough to make them walk realistically. Think of the leg of a chair. How do you make that roll and bank and bend when there is no joint? There are a lot of Piglins walking on surfaces, and it was a very difficult conundrum to solve. It took a lot of hard work from our motion edit team and our animation team to get those things walking realistically. You know, it’s doing that simple thing that you don't usually pay attention to. So that was one reasonably big challenge that is often literally buried in the shadows. The bigger one was something that was new to me. We often do a lot of our previs and postvis in-house and then finish the shots. And just because of circumstances and capacity, we did the postvis for the entire final battle, but we ended up sharing the sequence with Digital Domain, who did an amazing job completing some of the stuff on the battlefield we did post on. For me personally, I've never experienced not finishing what I started. But it was also really rewarding to see how well the work we had put in was honored by DD when they took it over.
    SS: I think the biggest challenge and the biggest achievement that I'm most proud of is really ending up with something that was well received by the wider audience. Of creating these two worlds, this sort of abstract adaptation of the Minecraft game and combining it with live-action. That was the achievement for me. That was the biggest challenge. We were all nervous from day one. And we continued to be nervous up until the day the movie came out. None of us really knew how it ultimately would be received. The fact that it came together and was so well received is a testament to everyone doing a fantastic job. And that's what I'm incredibly proud of.

    Dan Sarto is Publisher and Editor-in-Chief of Animation World Network.
    #minecraft #movie #wētā #helps #adapt
    ‘A Minecraft Movie’: Wētā FX Helps Adapt an Iconic Game One Block at a Time
    Adapting the iconic, block-based design aesthetic of Mojang’s beloved Minecraft videogame into the hit feature film comedy adventure, The Minecraft Movie, posed an enormous number of hurdles for director Jared Hess and Oscar-winning Production VFX Supervisor Dan Lemmon. Tasked with helping translate the iconic pixelated world into something cinematically engaging, while remaining true to its visual DNA, was Wētā FX, who delivered 450 VFX shots on the film. And two of their key leads on the film were VFX Supervisor Sheldon Stopsack and Animation Supervisor Kevin Estey.  But the shot count merely scratches the surface of the extensive work the studio performed. Wētā led the design and creation of The Overworld, 64 unique terrains spanning deserts, lush forests, oceans, and mountain ranges, all combined into one continuous environment, assets that were also shared with Digital Domain for their work on the 3rd act battle. Wētā also handled extensive work on the lava-filled hellscape of The Nether that involved Unreal Engine for early representations used in previs, scene scouting, and onset during principal photography, before refining the environment during post-production. They also dressed The Nether with lava, fire, and torches, along with atmospherics and particulate like smoke, ash, and embers. But wait… there’s more! The studio’s Art Department, working closely with Hess, co-created the look and feel of all digital characters in the film. For Malgosha’s henchmen, the Piglins, Wētā designed and created 12 different variants, all with individual characteristics and personalities. They also designed sheep, bees, pandas, zombies, skeletons, and lovable wolf Dennis. Many of these characters were provided to other vendors for their work on the film. Needless to say, the studio truly became a “Master Builder” on the show. The film is based on the hugely popular game Minecraft, first released by Sweden’s Mojang Studios in 2011 and purchased by Microsoft for billion in 2014, which immerses players in a low-res, pixelated “sandbox” simulation where they can use blocks to build entire worlds.  Here's the final trailer: In a far-ranging interview, Stopsack and Estey shared with AWN a peek into their creative process, from early design exploration to creation of an intricate practical cloak for Malgosha and the use of Unreal Engine for previs, postvis, and real-time onset visualization. Dan Sarto: The film is filled with distinct settings and characters sporting various “block” styled features. Can you share some of the work you did on the environments, character design, and character animation? Sheldon Stopsack: There's, there's so much to talk about and truth to be told, if you were to touch on everything, we would probably need to spend the whole day together.  Kevin Estey: Sheldon and I realized that when we talk about the film, either amongst ourselves or with someone else, we could just keep going, there are so many stories to tell. DS: Well, start with The Overworld and The Nether. How did the design process begin? What did you have to work with? SS: Visual effects is a tricky business, you know. It's always difficult. Always challenging. However, Minecraft stood out to us as not your usual quote unquote standard visual effects project, even though as you know, there is no standard visual effects project because they're all somehow different. They all come with their own creative ideas, inspirations, and challenges. 
But Minecraft, right from the get-go, was different, simply by the fact that when you first consider the idea of making such a live-action movie, you instantly ask yourself, “How do we make this work? How do we combine these two inherently very, very different but unique worlds?” That was everyone’s number one question. How do we land this? Where do we land this? And I don't think that any of us really had an answer, including our clients, Dan Lemmonand Jared Hess. Everyone was really open for this journey. That's compelling for us, to get out of our comfort zone. It makes you nervous because there are no real obvious answers. KE: Early on, we seemed to thrive off these kinds of scary creative challenges. There were lots of question marks. We had many moments when we were trying to figure out character designs. We had a template from the game, but it was an incredibly vague, low-resolution template. And there were so many ways that we could go. But that design discovery throughout the project was really satisfying.  DS: Game adaptations are never simple. There usually isn’t much in the way of story. But with Minecraft, from a visual standpoint, how did you translate low res, block-styled characters into something entertaining that could sustain a 100-minute feature film? SS: Everything was a question mark. Using the lava that you see in The Nether as one example, we had beautiful concept art for all our environments, The Overworld and The Nether, but those concepts only really took you this far. They didn’t represent the block shapes or give you a clear answer of like how realistic some of those materials, shapes and structures would be. How organic would we go? All of this needed to be explored. For the lava, we had stylized concept pieces, with block shaped viscosity as it flowed down. But we spent months with our effects team, and Dan and Jared, just riffing on ideas. We came full circle, with the lava ending up being more realistic, a naturally viscous liquid based on real physics. And the same goes with the waterfall that you see in the Overworld.  The question is, how far do we take things into the true Minecraft representation of things? How much do we scale back a little bit and ground ourselves in reality, with effects we’re quite comfortable producing as a company? There's always a tradeoff to find that balance of how best to combine what’s been filmed, the practical sets and live-action performances, with effects. Where’s the sweet spot? What's the level of abstraction? What's honest to the game? As much as some call Minecraft a simple game, it isn't simple, right? It's incredibly complex. It's got a set of rules and logic to the world building process within the game that we had to learn, adapt, and honor in many ways. When our misfits first arrive and we have these big vistas and establishing shots, when you really look at it, you, you recognize a lot of the things that we tried to adapt from the game. There are different biomes, like the Badlands, which is very sand stoney; there's the Woodlands, which is a lush environment with cherry blossom trees; you’ve got the snow biome with big mountains in the background. Our intent was to honor the game. 
KE: I took a big cue from a lot of the early designs, and particularly the approach that Jared liked for the characters and to the design in general, which was maintaining the stylized, blocky aesthetic, but covering them in realistic flesh, fur, things that were going to make them appear as real as possible despite the absolutely unreal designs of their bodies. And so essentially, it was squared skeleton… squarish bones with flesh and realistic fur laid over top. We tried various things, all extremely stylized. The Creepers are a good example. We tried all kinds of ways for them to explode. Sheldon found a great reference for a cat coughing up a hairball. He was nice to censor the worst part of it, but those undulations in the chest and ribcage… Jared spoke of the Creepers being basically tragic characters that only wanted to be loved, to just be close to you. But sadly, whenever they did, they’d explode. So, we experimented with a lot of different motions of how they’d explode. DS: Talk about the process of determining how these characters would move. None seem to have remotely realistic proportions in their limbs, bodies, or head size. KE: There were a couple things that Jared always seemed to be chasing. One was just something that would make him laugh. Of course, it had to sit within the bounds of how a zombie might move, or a skeleton might move, as we were interpreting the game. But the main thing was just, was it fun and funny? I still remember one of the earliest gags they came up with in mocap sessions, even before I even joined the show, was how the zombies get up after they fall over. It was sort of like a tripod, where its face and feet were planted and its butt shoots up in the air. After a lot of experimentation, we came up with basic personality types for each character. There were 12 different types of Piglins. The zombies were essentially like you're coming home from the pub after a few too many pints and you're just trying to get in the door, but you can't find your keys. Loose, slightly inebriated movement. The best movement we found for the skeletons was essentially like an old man with rigid limbs and lack of ligaments that was chasing kids off his lawn. And so, we created this kind of bible of performance types that really helped guide performers on the mocap stage and animators later on. SS: A lot of our exploration didn’t stick. But Jared was the expert in all of this. He always came up with some quirky last-minute idea.  KE: My favorite from Jared came in the middle of one mocap shoot. He walked up to me and said he had this stupid idea. I said OK, go on. He said, what if Malgosha had these two little pigs next to her, like Catholic alter boys, swinging incense. Can we do that? I talked to our stage manager, and we quickly put together a temporary prop for the incense burners. And we got two performers who just stood there. What are they going to do? Jared said, “Nothing. Just stand there and swing. I think it would look funny.” So, that’s what we did.  We dubbed them the Priesty Boys. And they are there throughout the film. That was amazing about Jared. He was always like, let's just try it, see if it works. Otherwise ditch it. DS: Tell me about your work on Malgosha. And I also want to discuss your use of Unreal Engine and the previs and postvis work.  SS: For Malgosha as a character, our art department did a phenomenal job finding the character design at the concept phase. But it was a collective effort. So many contributors were involved in her making. 
And I'm not just talking about the digital artists here on our side. It was a joint venture of different people having different explorations and experiments. It started off with the concept work as a foundation, which we mocked up with 3D sketches before building a model. But with Malgosha, we also had the costume department on the production side building this elaborate cloak. Remember, that cloak kind of makes 80, 85% of her appearance. It's almost like a character in itself, the way we utilized it. And the costume department built this beautiful, elaborate, incredibly intricate, practical version of it that we intended to use on set for the performer to wear. It ended up being too impractical because it was too heavy. But it was beautiful. So, while we didn't really use it on set, it gave us something physically to kind of incorporate into our digital version. KE: Alan Henry is the motion performer who portrayed her on set and on the mocap stage. I've known him for close to 15 years. I started working with him on The Hobbit films. He was a stunt performer who eventually rolled into doing motion capture with us on The Hobbit. He’s an incredible actor and absolutely hilarious and can adapt to any sort of situation. He’s so improvisational. He came up with an approach to Malgosha very quickly. Added a limp so that she felt decrepit, leaning on the staff, adding her other arm as kind of like a gimp arm that she would point and gesture with.   Even though she’s a blocky character, her anatomy is very much a biped, with rounder limbs than the other Piglins. She's got hooves, is somewhat squarish, and her much more bulky mass in the middle was easier to manipulate and move around. Because she would have to battle with Steve in the end, she had to have a level of agility that even some of the Piglins didn't have. DS: Did Unreal Engine come into play with her?  SS: Unreal was used all the way through the project. Dan Lemmon and his team early on set up their own virtual art department to build representations of the Overworld and the Nether within the context of Unreal. We and Sony Imageworks tried to provide recreations of these environments that were then used within Unreal to previsualize what was happening on set during shooting of principal photography. And that's where our mocap and on-set teams were coming into play. Effects provided what we called the Nudge Cam. It was a system to do real-time tracking using a stereo pair of Basler computer vision cameras that were mounted onto the sides of the principal camera. We provided the live tracking that was then composited in real time with the Unreal Engine content that all the vendors had provided. It was a great way of utilizing Unreal to give the camera operators or DOP, even Jared, a good sense of what we would actually shoot. It gave everyone a little bit of context for the look and feel of what you could actually expect from these scenes.  Because we started this journey with Unreal having onset in mind, we internally decided, look, let's take this further. Let's take this into post-production as well. What would it take to utilize Unreal for shot creation? And it was really exclusively used on the Nether environment. I don’t want to say we used it for matte painting replacement. We used it more for say, let's build this extended environment in Unreal. 
Not only use it as a render engine with this reasonably fast turnaround but also use it for what it's good at: authoring things, quickly changing things, moving columns around, manipulating things, dressing them, lighting them, and rendering them. It became sort of a tool that we used in place of a traditional matte painting for the extended environments. KE: Another thing worth mentioning is we were able to utilize it on our mocap stage as well during the two-week shoot with Jared and crew. When we shoot on the mocap stage, we get a very simple sort of gray shaded diagnostic grid. You have your single-color characters that sometimes are textured, but they’re fairly simple without any context of environment. Our special projects team was able to port what we usually see in Giant, the software we use on the mocap stage, into Unreal, which gave us these beautifully lit environments with interactive fire and atmosphere. And Jared and the team could see their movie for the first time in a rough, but still very beautiful rough state. That was invaluable. DS: If you had to key on anything, what would say with the biggest challenges for your teams on the film? You're laughing. I can hear you thinking, “Do we have an hour?”  KE: Where do you begin?  SS: Exactly. It's so hard to really single one out. And I struggle with that question every time I've been asked that question. KE: I’ll start.  I've got a very simple practical answer and then a larger one, something that was new to us, kind of similar to what we were just talking about. The simple practical one is the Piglins square feet with no ankles. It was very tough to make them walk realistically. Think of the leg of a chair. How do you make that roll and bank and bend because there is no joint? There are a lot of Piglins walking on surfaces and it was a very difficult conundrum to solve. It took a lot of hard work from our motion edit team and our animation team to get those things walking realistically. You know, it’s doing that simple thing that you don't usually pay attention to. So that was one reasonably big challenge that is often literally buried in the shadows. The bigger one was something that was new to me. We often do a lot of our previs and postvis in-house and then finish the shots. And just because of circumstances and capacity, we did the postvis for the entire final battle, but we ended up sharing the sequence with Digital Domain, who did an amazing job completing some of the stuff on the Battlefield we did post on. For me personally, I've never experienced not finishing what I started. But it was also really rewarding to see how well the work we had put in was honored by DD when they took it over.   SS: I think the biggest challenge and the biggest achievement that I'm most proud of is really ending up with something that was well received by the wider audience. Of creating these two worlds, this sort of abstract adaptation of the Minecraft game and combining it with live-action. That was the achievement for me. That was the biggest challenge. We were all nervous from day one. And we continued to be nervous up until the day the movie came out. None of us really knew how it ultimately would be received. The fact that it came together and was so well received is a testament to everyone doing a fantastic job. And that's what I'm incredibly proud of. Dan Sarto is Publisher and Editor-in-Chief of Animation World Network. #minecraft #movie #wētā #helps #adapt
    WWW.AWN.COM
    ‘A Minecraft Movie’: Wētā FX Helps Adapt an Iconic Game One Block at a Time
  • This is not a pipe: UX, AI, and the risk of satisficed product design

    AI’s grip on design forces us to reconsider our role in shaping perception, reality, and—most importantly—decision-making.

Image composed in Figma using AI-generated assets.

I love a good prototype.

You know that old saying—a picture’s worth a thousand words? Well, a prototype is worth a million, especially if you’re a developer, a stakeholder, or a decision-maker trying to make sense of a complex idea with a lot of moving parts.

A prototype compresses context. It gives form to the abstract. It invites feedback for iteration and improvement. I’ve built them my whole career, and I still believe they’re the most powerful artifacts in product design.

But I’m also starting to worry.

The old days

Back in the early days of the web, I used to prototype in hand-coded HTML. Not because I loved code, but because I cared about quality. Browsers were unpredictable animals. Netscape and IE rendered the same markup in wildly different ways. The best we could do was chase consistency through hours of trial and error—hoping somehow that one of us would find and document the answer for the rest.

Then Jeffrey Zeldman came along, armed with his famous pop culture wit and transparent brilliance, rallying the web community behind standards and semantic code. And it worked. Slowly, thankfully, the browser makers listened. We built better websites with better languages. HTML became standardized and meaningful under the hood.

That was craft.

Not just the mechanics of markup, but the intentionality behind it. Craft, to me, is thoughtful execution learned over time. It’s the subtle accumulation of experience, taste, and judgment. It’s a uniquely human achievement.

The new now

Fast forward to today, and we’re surrounded by tools promising instant output. AI is the new rallying cry, and its promise is both thrilling and disorienting.

Tools like Lovable, v0.dev, and Cursor offer prototyping at the speed of thought. With a single prompt, we can summon UI layouts, component libraries, even entire interaction flows. It’s an addictive sort of magic. And in a product world driven by speed and iteration, this kind of acceleration is a godsend.

But there’s something quietly unsettling about the ease of it all.

Because with great speed comes great risk—perhaps to our users and to our own hard-won standards. And ironically, those who seem to value “craft” as the standard bearers of the current definition—forged exclusively in the conventional tooling of Figma—seem to be the loudest proponents of the new speed.

René Magritte, The Treachery of Images (1929). Los Angeles County Museum of Art.

This is not a pipe

Magritte once painted a pipe and wrote underneath, “Ceci n’est pas une pipe”—This is not a pipe.

He was right. It’s just a painting of a pipe, a representation, not the object itself. Postmodern thinkers wasted many French brain cells expanding on this idea, which eventually made its way into popular culture via The Matrix film franchise.

In UX, we live and breathe representations. Wireframes, mockups, user flows, prototypes—they’re all stand-ins for future experiences. And yet, stakeholders and product teams often quickly treat them as the final product. The flow becomes the experience. The mockup becomes the truth.

Add AI to the mix, and the illusion intensifies exponentially.

When an AI-generated interface looks authentic and clickable, it’s dangerously easy to accept it at face value. But what if it’s based on flawed assumptions? What if it reflects patterns that don’t serve our users? What if it simply looks finished, when it’s not even close to holding real value?

The risk of satisficing

Herbert Simon had a made-up word for this kind of decision-making: satisficing. A blend of “satisfy” and “suffice.” It means settling for a good-enough solution when the perfect one is too costly or too far out of reach.

In AI-generated design, satisficing isn’t just a risk—it’s the default.

The algorithm gives us something that looks fine, behaves fine, and maybe even tests fine. And in the absence of the right checkpoints for critical thought, we’re liable to ship it. Not because it’s right, but because it’s fast and frictionless.

And that worries me.

Because over time, we get complacent and stuck in our comfort zones. When that happens, design becomes more template-driven. Interfaces lose connectivity to the humans they’re supposed to serve. And worst of all, we stop asking why.

Diagram inspired by Herbert Simon’s model of bounded rationality. Created by author.

Shifting times (and how we respond)

Now, there’s nothing inherently wrong about satisficed decision making. In fact, Simon viewed the term practically—recognizing that humans, limited in time, knowledge, and processing capacity, operate within what he called a “bounded rationality.”

In agile product design, this is the whole point of an MVP.

The problem arises when we’re out of sync with one another, when one discipline overrides the other with disregard, deciding that something is “good enough” without considering the wider trade-offs.

The optimist in me wants to believe we’re well-suited and prepared for this inevitability.

I’m currently one of those displaced knowledge workers, looking for my next opportunity in UX / Product Design. I’ve seen the shift from using the term UX Designer to Product Designer in the job descriptions. Leaving the organizational debates and the shameful clickbait aside, this shift seems to signal a natural evolution—traditional UX design roles are moving deeper into product delivery.

But if design and product are becoming equal partners in the organizational chart, then our collective vision should be to make decisions together, without being a consensus machine. That means mapping out our processes and synthesizing data into rational decisions within a new bounded reality—one that’s accelerated from the start.

Because the point isn’t to eliminate satisficing. It’s to make it conscious, collaborative, and aligned. UX and design professionals need to be embedded in the conversation—not just reacting to outputs, but helping frame the questions and the goals. Otherwise, speed wins by default—leaving craft, context, and care lost in the latest sprint.

The new frontier

I’m not anti-AI. Quite the opposite. I’m genuinely excited about what these tools can unlock—especially in early design stages, where low fidelity and high experimentation are crucial. We should be moving faster. We should be looking at and testing more ideas. We should be using AI to remove blockers and free up energy for deeper thinking.

But we also need to stay alert. We need to protect the human-centered insights and the basic fundamentals of context and critical thought that live outside the models.

We can’t let the ease of generation become a substitute for our better judgment. We can’t let groupthink dictate taste. We can’t let empathy get stripped from the process just because the output looks like a viable product to the loudest person in the room.

As designers, our job is not just to create. It’s to question. To inform. To shape. To provoke. To guide.

And sometimes, to remind the team… This is not a pipe.

This is not a pipe: UX, AI, and the risk of satisficed product design was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
    #this #not #pipe #risk #satisficed
    UXDESIGN.CC
    This is not a pipe: UX, AI, and the risk of satisficed product design
  • Mission: Impossible Box Office Deja Vu: Tom Cruise Has Second Good Opening Against Lilo & Stitch 

    We’re not sure if he chose to accept it intentionally or not, but Tom Cruise has cleared his mission in providing movie theaters with a healthy opening weekend against Disney’s bizarre, Elvis-loving alien for the second time in 23 years. Yep, more than two decades after Cruise shared the same opening frame with the animated Lilo & Stitch in 2002—when the hand-drawn Gen-Z classic went head to head with Cruise and Steven Spielberg’s neo noir sci-fi, Minority Report—the movie star has danced with the little space dude again via Mission: Impossible – The Final Reckoning opening opposite the Lilo & Stitch remake.
    And this time, the pecking order is reversed.

    Twenty-three years ago, it was considered almost ho-hum when Minority Report topped out above Lilo & Stitch and both films managed to gross north of $35 million. This was otherwise business as usual in a healthy summer movie season where the real anomaly was that the first Spider-Man had become the first movie to cross the $100 million mark in a weekend a month earlier. At the time, Minority Report did slightly better with $35.7 million versus Lilo’s $35.2 million. But in the year of our streaming lord 2025, it’s a big win for movie theaters that both Final Reckoning and ESPECIALLY Disney’s mostly live-action remake have generated the biggest Memorial Day weekend ever in the U.S., albeit now with Lilo on top via its estimated $180 million opening across four days. For the record, this also snags another benchmark from Cruise by taking the biggest Memorial Day opening record from Top Gun: Maverick ($161 million in 2022). Furthermore, Lilo earned a jaw-dropping $342 million worldwide.
    Meanwhile Mission: Impossible – The Final Reckoning is projected to have opened at $77 million across its first four days, and $63 million over the first three days. Some will likely speculate how this can make up for the much gossiped about budget of the film—with Puck News estimating the eighth Mission film costing a gargantuan $400 million—but taken in perspective of the whole franchise, this is a very good start for The Final Reckoning, which was a victim of both COVID filming pauses and delays, and then later having to suspend production because of the 2023 labor strikes.

    For context, the previous best opening the M:I series ever saw was when Mission: Impossible – Fallout debuted to $61 million during a conventional three-day weekend in 2018. That movie also is one of the finest action films ever produced and received an “A” CinemaScore. In retrospect, it would seem when a masterpiece of blockbuster cinema like that could not clear $70 million, a definite ceiling on the franchise’s earning potential had slowly materialized in recent years. Consider that before Fallout, the best opening in the series was Mission: Impossible II back in 2000, a clean quarter-century ago, when it made $58 million (or about $108 million in 2025 dollars).
    In other words, the series’ most popular days are long behind it. Nonetheless, when not accounting for inflation, The Final Reckoning has enjoyed the largest opening weekend in the series’ history—including even when you discount the holiday Monday that buoys The Final Reckoning’s opening weekend to $77 million. In one sense, this proves that the goodwill Cruise and Ethan Hunt can still generate with his most loyal audience remains sky high (consider that, according to Deadline, Final Reckoning’s biggest demo was with audience members over the age of 55!). In another, it is also confirmation that regaining control of IMAX screens is crucial in the 2020s for a blockbuster with a loyal but relatively contained audience.
    After all, this is a big gain for the franchise over Dead Reckoning, which despite having a higher CinemaScore grade from audiences polled than Final Reckoning (an “A” vs. an “A-”) opened below $55 million two years ago, likely in part because audiences were saving their ticket-buying money for Barbenheimer the following weekend, which included Christopher Nolan’s Oppenheimer commandeering all the IMAX screens from Mission.
    At the end of the day, The Final Reckoning was able to grow business and audience interest over Dead Reckoning and set a franchise record in spite of opening in the same weekend as Disney’s lovable little alien.
    Whether it is enough to justify the rumored $400 million price tag is a horse of a different color. However, Cruise has positioned himself as such a champion of movie theater owners and the box office in a post-COVID world that he can certainly take a victory lap in helping deliver a historic win for the industry this Memorial Day. And frankly, given how we remain skeptical that The Final Reckoning
    #mission #impossible #box #office #deja
    WWW.DENOFGEEK.COM
    Mission: Impossible Box Office Deja Vu: Tom Cruise Has Second Good Opening Against Lilo & Stitch 
  • I tried vibe coding the most popular apps with Lovable

    Lovable is one of the newest tools that lets you vibe code out ideas. It's becoming incredibly easy to build your own MVP with tools like Lovable, even in a single prompt. In this video I'm going to show you some of the basics of vibe coding by coding out some of the most popular apps like Audible, YouTube, etc., and ones that I've also created in the past, like RPG tracker!

    If you want to learn more or try Lovable yourself, check it out below:
    https://lovable.dev/

    Also a big thanks to Lovable for sponsoring this video and making it and others like it happen on this channel!

    #lovable #ai #coding

    Want to learn more? Check out my courses!
    Teach Me Design - Course: https://www.enhanceui.com/
    OpenAI + GPT - Course & Templates: https://enhanceui.gumroad.com/
    #tried #vibe #coding #most #popular
    WWW.YOUTUBE.COM
    I tried vibe coding the most popular apps with Lovable
  • This Phone-Sized E-Reader Helped Me Smash My 2025 Reading Goal

    We may earn a commission from links on this page.

I used to regularly read more than 125 books a year, each meticulously logged on my Goodreads profile. I read during my commute and to wind down at night. I always had a paperback in my bag or an audiobook in my ears.

Then I got a smartphone. Then I got on Twitter. Then the 2016 presidential election happened. Then there was a pandemic, and for a while I stopped commuting altogether.

With every year, it seemed like there were more things to spiral about online, and fewer hours in the day to relax with a novel or read some stimulating non-fiction. Suddenly I found it hard to meet my much more modest reading goals, which dropped to 75, then 50, then 30 books a year. In 2023 and 2024, I set my sights on finishing just 20 books. I still had to cram at the end of the year to manage even that comparatively sluggish pace.

But things are different in 2025. It's May, and I've already met my 20-book reading goal, and I owe it all to my Boox Palma 2, a phone-shaped e-reader I can easily carry with me wherever I go.

    Boox Palma 2 E-Reader

    A device so good it has a cult following

    As I noted in my review of the Boox Palma—the now discontinued, nearly identical predecessor to the Palma 2—it's one of the most lovable electronic devices I have ever owned. It's a near perfect marriage of form (the easy-on-the-eyes e-ink screen popularized by Amazon's Kindle, a compact size) and function—with an open Android operating system and access to the Google Play store, you can use it to run reading apps from a variety of retailers, listen to audiobooks with Bluetooth headphones, or get a little work done on productivity apps like Gmail and Google Docs. At a time when increasing numbers of people are opting to make the switch to a "dumb phone" to escape the pull of their screen addictions, the Palma occupies a rather unique spot in the market: While it can do a great deal more than your standard Kindle, it still feels clunky and slow in comparison to your smartphone, but in the best way. It doesn't have a cellular connection, so if you aren't on wifi, you'll be unable to use the internet or update your social feeds. The black and white display means using it is soothing instead of stimulating, while still scratching that "gotta pull out my device" itch. Its quirky qualities have garnered it a cult following of sorts (ironically, adherents gather to discuss the device on Reddit and TikTok, two places to avoid if you want to get any reading done).

    The perfect form factor

    Leaving aside all the things social media and app developers do to make their products addictive, I struggle with regulating my phone use for the sole reason that my phone is always with me. It's how I keep in touch with my spouse and kids and it has effectively replaced my wallet, therefore it must be in my pocket at all times and hey, I might as well pull it out at every idle moment to check my notifications. Yes, I could carry a book or a standard-sized e-reader to look at instead, but that requires carrying a bag of some kind (or large pockets), and it's hard to beat the convenience of something you can shove into any pair of jeans.

    Well, the Palma 2 can be shoved into any pair of jeans. It has basically the identical form factor as most smartphones, and can even occupy the same pocket as my iPhone 14. This means that when I'm standing in line at the post office, or waiting for the train, or trying to maintain my balance on the train with only one hand free, I can effortlessly pull out my e-reader instead of my phone and absorb a few pages rather than frantically trying to refresh my Bluesky feed at subway stops.

    Slow and kinda clunky (in a good way)

    If the Palma 2 can access the Google Play store, what's to keep you from loading it up with all of the apps that already make your smartphone so addictive? Nothing! Go for it—stick Bluesky on there. Add Facebook and Instagram if you've yet to flee Meta's ecosystem. You can even load up video-based apps like YouTube and Netflix and time-wasting games like Subway Surfers.

    If you do, though, you'll quickly find that none of them are that enjoyable to use. Though Boox readers' e-ink displays employ variable refresh rate tech that makes them infinitely faster than early generation Kindles (where you could pause for a heartbeat between pressing a key on the virtual keyboard and actually seeing the text appear on the screen), even in the fastest modes they are only a fraction as responsive as a phone or tablet's LED screen. So while you certainly can use your Palma 2 to scroll social media or watch a few TikToks, you won't particularly want to, because it's kind of bad at them, but in a way I love: The device is optimized for reading text or comics (particularly black and white manga), and it presents that material so well, and so conveniently, that I want to carry it around with me everywhere so I can read on it all the time.

    So far, it's going well: As I said, I've already hit my 20-book reading goal for the year. In the meantime, if you're looking for books you can binge to get you out of a doomscrolling funk, I recommend the Dungeon Crawler Carl series by Matt Dinniman. After picking up the first one in February, I blew through the seven thus-released books (ranging in length from 400 to 800 pages) in about six weeks. And yes, I read every word of them on my Palma 2.
    LIFEHACKER.COM
    This Phone-Sized E-Reader Helped Me Smash My 2025 Reading Goal
  • Design in the age of vibes

    What the new wave of AI design and dev tools — Bolt, V0, Lovable, and Figma Make — mean for the future of software design.

    Prompt by the author, image generated by Sora.

    This article builds on reflections I shared last July in The expanded scope and blurring boundaries of AI-powered design, outlining what’s changed in a short time, and what it means for those designing software and leading design teams.

    Like many others, I’ve been exploring tools like Bolt, Lovable, V0, and most recently Figma Make, looking at how they are changing the way we build software today, and what that means for the future. For those who may not know, these tools are part of a new wave of AI-powered design and development platforms that aim to speed up how we go from prompt to prototype, automating front-end code, generating UI from prompts, and bridging the gap between design and engineering. Bolt is now the second fastest-growing product in history, just behind ChatGPT.

    While the AI hype hasn’t slowed since ChatGPT’s launch, it’s quickly becoming apparent that these tools represent a step change, one that is rapidly reshaping how we work, and how software gets built.

    An example of the Bolt.new UI interface

    This shift didn’t start with AI

    Even before the recent explosion of AI tooling, design teams have been evolving their approach and expanding their scope of impact. Products like Figma enabled more fluid communication and cross-disciplinary collaboration, while design systems and front-end frameworks like Material, Tailwind, Radix and other libraries helped codify and systematise best practices for visual design, interaction and accessibility.

    This enabled designers to spend more time thinking about the broader systems, increasing iteration cycles — and less time debating padding. While such tools and frameworks helped to elevate the baseline user experience for many products, in enterprise SaaS in particular, they have had their share of criticism for the resulting sea of sameness that they generated. AI tools are now accelerating and amplifying some of the consequences, both positive and negative. These products represent not just a tooling upgrade, but a shift in what design is, who does it, and how teams are built.

    Design has evolved from the design of objects, both physical and immaterial, to the design of systems, to the design of complex adaptive systems. The evolution is shifting the role of designers; they are no longer the central planner but rather participants within the systems they exist in. This is a fundamental shift — one that requires a new set of values.
    — Joi Ito, MIT Media Lab (Jan 2016)

    What AI tools are making possible

    This new wave of AI tools can generate high-quality UIs from a prompt, screenshot, or Figma frame. Work that once required a multidisciplinary team and weeks of effort — from concept to coded prototype — can now happen in a matter of hours. Best practices are baked in. Layouts are responsive by default. Interaction logic is defined in a sentence. Even connecting to real data is no longer a blocker, it’s part of the flow.

    Lovable, one of the many new AI design and full-stack development tools launched recently

    These tools differ from popular IDE-based assistants like Cursor, Copilot and Windsurf in both purpose and level of abstraction. UI-based tools like Bolt automate many of the more complex and often intimidating parts of the developer workflow: spinning up environments, scaffolding projects, managing dependencies, and deploying apps.
    That makes sense, given that many of them were built by hosting platforms such as Vercel and Replit.

    With this new speed and ease of use, designers don’t need to wait on engineers to see how something feels in practice. They can test ideas with higher fidelity faster, explore more variations, and evolve the experience in tight feedback loops.

    Figma Make: Start with a design and prompt your way to a functional prototype, fast — all in Figma.

    This shift has also given rise to what some are calling ‘vibe coding’, a term coined by Andrej Karpathy that captures this expressive, real-time way of building software. Instead of following a strict spec or writing code line by line, you start with a vibe or loose concept, and use these tools to sculpt the idea into something functional. You prompt, tweak, adjust components, and refine until it feels right. It’s intuitive, fast, and fluid.

    There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good.
    — Andrej Karpathy

    In this new paradigm, the output isn’t just faster, it’s driven by rapid judgement and intuition, not necessarily depth of technical experience. In addition, the barrier to entry for non-designers to explore ideas has lowered too. Now that anyone can create compelling, usable apps with front-end and back-end logic, what does that mean for design?

    I would love to say that this means more time spent on outcomes and higher-impact work for designers, but it’s more likely to disrupt the foundations of what it means to be a designer. The boundaries between the classic product triad of design, engineering and product management were already blurring, but this looks like it will accelerate even more.

    We are in the middle of a significant industry shift, and we’re heading into a period of rapid, unpredictable change. While testing some of these new AI tools, I have had several ‘oh shit’ moments where I get a sense of how things might evolve… this is what copywriters and others in similar writing roles must have felt when ChatGPT first came out.

    The author, while vibe coding (image via Giphy)

    What this might mean for design

    As UI generation becomes commoditized, the value of design shifts upstream. With that, the scope of what is expected from design will shift. Future design teams are likely to be smaller, and more embedded in product strategy. As companies grow, design functions won’t necessarily need bigger design teams, they will need higher-leverage ones.

    Meanwhile, designers and engineers will work more closely together — not through handoff, but through shared tools and live collaboration. In enterprise environments in particular, much of the engineering work is not so much about zero-to-one implementation but about working within and around established technical constraints. As front-end becomes commoditized, engineers will shift their focus further upstream to establishing strong technical foundations and systems for teams to build from.

    From years of experience to mindset

    Some worry this shift will reduce opportunities for junior designers. It’s true there may be fewer entry-level roles focused on production work. But AI-native designers entering the field now may have an edge over seasoned professionals who are tied to traditional methods.

    In an AI-driven world, knowing the “right” design process won’t matter as much.
    Technical skills, domain expertise and a strong craft will still help, but what really counts is getting results — regardless of how you get there.

    The greatest danger in times of turbulence is not the turbulence, it is to act with yesterday’s logic.
    — Peter Drucker

    Mindset will matter more than experience. Those who adapt fast and use AI to learn new domains quickly will stand out. We are already starting to see this unfold. Tobi Lutke, CEO of Shopify, recently stated that AI usage is now a baseline expectation at Shopify. He went even further, stating that “Before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI”.

    This demonstrates that adaptability and AI fluency are becoming core expectations. In this new landscape, titles and years of experience will matter less. Designers who can leverage AI as a force multiplier will outpace and outshine those relying on traditional workflows or rigid processes.

    Speed isn’t everything

    Note that I didn’t use the word taste, which many now describe as critical in the AI era. I cringe a little when I hear it — taste feels vague and subjective, often tied to the ‘I’ll know it when I see it’ mindset the design industry has been trying to shake off for years. While I get the intent, I prefer to describe this as judgment: the ability to make calls informed by experience and grounded in clear intent, shared principles, and a solid grasp of user and technical context — not personal preference or aesthetic instinct. When you can create infinite variations almost instantly, judgment is what helps you identify what’s truly distinct, useful and worth refining.

    What does this mean for designing within enterprise environments

    I lead the design team at DataRobot, a platform that helps AI builders create and manage agentic, generative and predictive workflows within large enterprises. We’ve been exploring how AI tools can augment design and development across the org.

    Screens from the DataRobot AI platform

    While these tools are great for initial ideation, this is often only a small part of the work in enterprise environments. Here, the reality is more complex: teams work within deeply established workflows, technical frameworks, and products with large surface areas.

    This differs from consumer design, where teams often have more freedom to invent patterns and push visual boundaries. Enterprise design is about reliability, scalability, and trust. It means navigating legacy systems, aligning with highly technical stakeholders, and ensuring consistency across a broad suite of tools.

    For us, one of the clearest use cases for AI tooling has been accelerating early-stage concepting and customer validation. While most of our focus is on providing infrastructure to build and manage AI models, we’ve recently expanded into custom AI apps, tailored for specialized workflows across a broad range of industries and verticals. The number of UI variants we would need to support is simply too vast for traditional design processes to cover.

    Some examples of DataRobot applications — both production and concept.

    In the past, this would have meant manually designing multiple static iterations and getting feedback based on static mocks. Now, AI tools let us spin up tailored interfaces, with dynamic UI elements for different industries and customer contexts, while adhering to our design system and following best practices for accessibility. Customers get to try something close to the real output and we get better signal earlier in the cycle, reducing wasted effort and resources.
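    To make "adhering to our design system" a little more concrete, here is a hypothetical excerpt of the kind of guidance a design team might feed these tools so that generated screens stay on brand. The specific tokens, component names and rules are my own illustrative assumptions, not DataRobot's actual prompt guide:

        Always use the product's design tokens: primary #2D6AE3, surface #F7F8FA, an 8px spacing grid, and Inter for UI text.
        Compose screens from existing components (PageHeader, DataTable, FilterBar, EmptyState) before inventing new ones.
        Every interactive element needs a visible label, a focus state, and a minimum 44px touch target.
        Charts default to the accessible palette and always include axis labels plus empty and loading states.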
    In this context, the strict frameworks used by tools like V0 (such as Tailwind) are an advantage. They provide guardrails, meaning you need to go out of your way to create a bad experience. It’s early days, but this is helping non-designers in particular to get early-stage validation with customers and prospects.

    This means the role of the design team is to provide the framework for others to execute, creating prompt guides that codify our design system and visual language, so that outputs remain on brand. Then we step in deeper after direction is validated. In effect, we’re shifting from execution to enablement. Design is being democratized. That’s a good thing, as long as we set the frame.

    Beyond the baseline

    AI has raised the baseline. That helps with speed and early validation, but it won’t help you break new ground. Generative tools are by their nature derivative.

    When everything trends toward average, we need new ways to raise the ceiling. Leadership in this context means knowing when to push beyond the baseline, shaping a distinct point of view grounded in reality and underpinned by strong principles. That point of view should be shaped through deep cross-functional collaboration, with a clear understanding of strategy, user needs, and the broader market.

    In a world where AI makes it easier than ever to build software, design is becoming more essential and more powerful. It’s craft, quality, and point of view that makes a product stand out and be loved.
    — Dylan Field

    What to focus on now

    For individual contributors or those just starting out, it can feel daunting and difficult to know where to start:

    Start experimenting: Don’t wait for the perfect course, permission or excuse. Just jump in and run small tests. See how you can replicate previous briefs (or current briefs in parallel) in order to get a feel for where they excel and where they break.

    Look for leverage: Don’t just use these tools to move faster — use them to think differently. How might you explore more directions, test ideas earlier, or involve others upstream?

    Contribute to the system: Consider how you might codify what works to improve patterns, prompts, or workflows. This is increasingly where high-impact work will live.

    If you’re leading a design team:

    Design the system, not just the UI: Build the tools, patterns, and prompts that others can use to move fast.

    Codify best practices: Think how you might translate tribal knowledge into actionable context and principles, for both internal teams and AI systems.

    Exercise (your) judgement: Train your team to recognize good from average in the context of your product. Establish a shared language for what good means in your context, and how you might elevate your baseline.

    Final thoughts

    The UI layer is becoming automated. That doesn’t make design less important — it makes it more critical. Now everyone can ship something decent, but only a great team can ship something exceptional.

    AI might handle the pixels, but it’s just a tool. Design’s purpose is clearer than ever: understanding users, shaping systems, and delivering better outcomes. AI tools should amplify our capabilities, not make us complacent. This means that while we integrate them into our workflows, we must continue to sharpen our core skills.
    What Paul Graham said about writing applies equally to design.

    When you lose the ability to write, you also lose some of your ability to think.
    — Paul Graham

    This article was written with the assistance of ChatGPT 4o.

    John Moriarty leads the design team at DataRobot, an enterprise AI platform that helps AI practitioners to build, govern and operate predictive and generative AI models. Before this, he worked in Accenture, HMH and Design Partners.

    Design in the age of vibes was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
    UXDESIGN.CC
    Design in the age of vibes