• So, the only game that truly captures the essence of John Wick, "John Wick Hex," is being removed in a mere 72 hours. Just when you thought you could spend your weekends living out your action movie fantasies, the universe decides to pull the rug out from under you. Who needs tactical role-playing games that make you feel like a badass when you can just watch Keanu Reeves do it on repeat?

    It's almost poetic, really. A game that nailed the vibes of those high-octane movies is about to vanish as suddenly as a villain in a well-timed headshot. But hey, at least we have the memories—until they fade away, too!

    #JohnWickHex #GamingNews #KeanuRe
    KOTAKU.COM
    The Best And Only Available John Wick Game Is Being Removed In 72 Hours
    John Wick Hex, released in 2019, was a tactical role-playing game that recreated the vibes and feel of the Keanu Reeves-starring action movies perfectly. And now, with little warning, John Wick Hex is being delisted from all platforms. Read more...
  • Why Designers Get Stuck In The Details And How To Stop

    You’ve drawn fifty versions of the same screen — and you still hate every one of them. Begrudgingly, you pick three, show them to your product manager, and hear: “Looks cool, but the idea doesn’t work.” Sound familiar?
    In this article, I’ll unpack why designers fall into detail work at the wrong moment, examining both process pitfalls and the underlying psychological reasons, as understanding these traps is the first step to overcoming them. I’ll also share tactics I use to climb out of that trap.
    Reason #1: You’re Afraid To Show Rough Work
    We designers worship detail. We’re taught that true craft equals razor‑sharp typography, perfect grids, and pixel precision. So the minute a task arrives, we pop open Figma and start polishing long before polish is needed.
    I’ve skipped the sketch phase more times than I care to admit. I told myself it would be faster, yet I always ended up spending hours producing a tidy mock‑up when a scribbled thumbnail would have sparked a five‑minute chat with my product manager. Rough sketches felt “unprofessional,” so I hid them.
    The cost? Lost time, wasted energy — and, by the third redo, teammates were quietly wondering if I even understood the brief.
    The real problem here is the habit: we open Figma and start perfecting the UI before we’ve even solved the problem.
    So why do we hide these rough sketches? It’s not just a bad habit or plain silly. There are solid psychological reasons behind it. We often just call it perfectionism, but it’s deeper than wanting things neat. Digging into the psychology (like the research by Hewitt and Flett) shows there are a couple of flavors driving this:

    Socially prescribed perfectionism: the nagging feeling that everyone else expects perfect work from you, which makes showing anything rough feel like walking into the lion’s den.
    Self-oriented perfectionism: you’re the one setting impossibly high standards for yourself, leading to brutal self-criticism if anything looks slightly off.

    Either way, the result’s the same: showing unfinished work feels wrong, and you miss out on that vital early feedback.
    Back to the design side, remember that clients rarely see architects’ first pencil sketches, but these sketches still exist; they guide structural choices before the 3D render. Treat your thumbnails the same way — artifacts meant to collapse uncertainty, not portfolio pieces. Once stakeholders see the upside, roughness becomes a badge of speed, not sloppiness. So, the key is to consciously make that shift:
    Treat early sketches as disposable tools for thinking and actively share them to get feedback faster.

    Reason #2: You Fix The Symptom, Not The Cause
    Before tackling any task, we need to understand what business outcome we’re aiming for. Product managers might come to us asking to enlarge the payment button in the shopping cart because users aren’t noticing it. The suggested solution itself isn’t necessarily bad, but before redesigning the button, we should ask, “What data suggests they aren’t noticing it?” Don’t get me wrong, I’m not saying you shouldn’t trust your product manager. On the contrary, these questions help ensure you’re on the same page and working with the same data.
    From my experience, here are several reasons why users might not be clicking that coveted button:

    Users don’t understand that this step is for payment.
    They understand it’s about payment but expect order confirmation first.
    Due to incorrect translation, users don’t understand what the button means.
    Lack of trust signals (no security icons, unclear seller information).
    Unexpected additional costs (hidden fees, shipping) that appear at this stage.
    Technical issues (inactive button, page freezing).

    Now, imagine you simply did what the manager suggested. Would you have solved the problem? Hardly.
    Moreover, the responsibility for the unresolved issue would fall on you, as the interface solution lies within the design domain. The product manager actually did their job correctly by identifying a problem: suspiciously few users are clicking the button.
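Before debating the button at all, it helps to see where users actually drop off. Here is a minimal sketch of that kind of funnel check; the event names and counts are made up purely for illustration:

```python
# Hypothetical checkout-funnel event counts (illustrative numbers only).
steps = [
    ("cart_viewed", 10_000),
    ("payment_button_seen", 9_200),
    ("payment_button_clicked", 1_400),
    ("payment_completed", 1_250),
]

def drop_offs(steps):
    """Return the fractional drop between each pair of consecutive funnel steps."""
    return [
        (a_name, b_name, 1 - b_n / a_n)
        for (a_name, a_n), (b_name, b_n) in zip(steps, steps[1:])
    ]

for a, b, drop in drop_offs(steps):
    print(f"{a} -> {b}: {drop:.0%} drop")
```

With these invented numbers, the biggest cliff sits between seeing the button and clicking it, which is exactly the kind of evidence worth putting on the table with the product manager before anyone opens Figma.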
    Psychologically, taking on this bigger role isn’t easy. It means overcoming the fear of making mistakes and the discomfort of exploring unclear problems rather than just doing tasks. This shift means seeing ourselves as partners who create value — even if it means fighting a hesitation to question product managers (which might come from a fear of speaking up or a desire to avoid challenging authority) — and understanding that using our product logic expertise proactively is crucial for modern designers.
    There’s another critical reason why we, designers, need to be a bit like product managers: the rise of AI. I deliberately used a simple example about enlarging a button, but I’m confident that in the near future, AI will easily handle routine design tasks. This worries me, but at the same time, I’m already gladly stepping into the product manager’s territory: understanding product and business metrics, formulating hypotheses, conducting research, and so on. It might sound like I’m taking work away from PMs, but believe me, they undoubtedly have enough on their plates and are usually more than happy to delegate some responsibilities to designers.
    Reason #3: You’re Solving The Wrong Problem
    Before solving anything, ask whether the problem even deserves your attention.
    During a major home‑screen redesign, our goal was to drive more users into paid services. The initial hypothesis — making service buttons bigger and brighter might help returning users — seemed reasonable enough to test. However, even when A/B tests (a method of comparing two versions of a design to determine which performs better) showed minimal impact, we continued to tweak those buttons.
    Only later did it click: the home screen isn’t the place to sell; visitors open the app to start, not to buy. We removed that promo block, and nothing broke. Contextual entry points deeper into the journey performed brilliantly. Lesson learned:
    Without the right context, any visual tweak is lipstick on a pig.
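When an experiment like this stalls, a quick significance check can tell you whether the data even supports another round of tweaking. A rough sketch of a two-proportion z-test; the function and the click counts are my own illustration, not figures from the original experiment:

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    meaningfully different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# e.g. 480/10,000 clicks on the old button vs 495/10,000 on the bigger one
z, p = ab_test_z(480, 10_000, 495, 10_000)
print(f"z = {z:.2f}, p = {p:.2f}")
```

A p-value this far from significance is the data politely saying the button isn’t the lever; the honest move is to stop polishing it and question the hypothesis instead.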

    Why did we get stuck polishing buttons instead of stopping sooner? It’s easy to get tunnel vision. Psychologically, it’s likely the good old sunk cost fallacy kicking in: we’d already invested time in the buttons, so stopping felt like wasting that effort, even though the data wasn’t promising.
    It’s just easier to keep fiddling with something familiar than to admit we need a new plan. Perhaps the simple question I should have asked myself when results stalled was: “Are we optimizing the right thing or just polishing something that fundamentally doesn’t fit the user’s primary goal here?” That alone might have saved hours.
    Reason #4: You’re Drowning In Unactionable Feedback
    We all discuss our work with colleagues. But here’s a crucial point: what kind of question do you pose to kick off that discussion? If your go-to is “What do you think?” well, that question might lead you down a rabbit hole of personal opinions rather than actionable insights. While experienced colleagues will cut through the noise, others, unsure what to evaluate, might comment on anything and everything — fonts, button colors, even when you desperately need to discuss a user flow.
    What matters here are two things:

    The question you ask,
    The context you give.

    That means clearly stating the problem, what you’ve learned, and how your idea aims to fix it.
    For instance:
    “The problem is our payment conversion rate has dropped by X%. I’ve interviewed users and found they abandon payment because they don’t understand how the total amount is calculated. My solution is to show a detailed cost breakdown. Do you think this actually solves the problem for them?”

    Here, you’ve stated the problem, shared your insight, explained your solution, and asked a direct question. It’s even better if you prepare a list of specific sub-questions. For instance: “Are all items in the cost breakdown clear?” or “Does the placement of this breakdown feel intuitive within the payment flow?”
    Another good habit is to keep your rough sketches and previous iterations handy. Some of your colleagues’ suggestions might be things you’ve already tried. It’s great if you can discuss them immediately to either revisit those ideas or definitively set them aside.
    I’m not a psychologist, but experience tells me that, psychologically, the reluctance to be this specific often stems from a fear of our solution being rejected. We tend to internalize feedback: a seemingly innocent comment like, “Have you considered other ways to organize this section?” or “Perhaps explore a different structure for this part?” can instantly morph in our minds into “You completely messed up the structure. You’re a bad designer.” Imposter syndrome, in all its glory.
    So, to wrap up this point, here are two recommendations:

    Prepare for every design discussion. A couple of focused questions will yield far more valuable input than a vague “So, what do you think?”
    Actively work on separating feedback on your design from your self-worth. If a mistake is pointed out, acknowledge it, learn from it, and you’ll be less likely to repeat it. This is often easier said than done. For me, it took years of working with a psychotherapist. If you struggle with this, I sincerely wish you strength in overcoming it.

    Reason #5: You’re Just Tired
    Sometimes, the issue isn’t strategic at all — it’s fatigue. Fussing over icon corners can feel like a cozy bunker when your brain is fried. There’s a name for this: decision fatigue. Basically, your brain’s battery for hard thinking is low, so it hides out in the easy, comfy zone of pixel-pushing.
    A striking example comes from a New York Times article titled “Do You Suffer From Decision Fatigue?” It described how judges deciding on release requests were far more likely to grant release early in the day compared to late in the day, simply because their decision-making energy was depleted. Luckily, designers rarely hold someone’s freedom in their hands, but the example dramatically shows how fatigue can impact our judgment and productivity.
    What helps here:

    Swap tasks. Trade tickets with another designer; novelty resets your focus.
    Talk to another designer. If NDA permits, ask peers outside the team for a sanity check.
    Step away. Even a ten‑minute walk can do more than a double‑shot espresso.

    By the way, I came up with these ideas while walking around my office. I was lucky to work near a river, and those short walks quickly turned into a helpful habit.

    And one more trick that helps me snap out of detail mode early: if I catch myself making around 20 little tweaks — changing font weight, color, border radius — I just stop. Over time, it turned into a habit. I have a similar one with Instagram: by the third reel, my brain quietly asks, “Wait, weren’t we working?” Funny how that kind of nudge saves a ton of time.
    Four Steps I Use to Avoid Drowning In Detail
    Knowing these potential traps, here’s the practical process I use to stay on track:
    1. Define the Core Problem & Business Goal
    Before anything, dig deep: what’s the actual problem we’re solving, not just the requested task or a surface-level symptom? Ask ‘why’ repeatedly. What user pain or business need are we addressing? Then, state the clear business goal: “What metric am I moving, and do we have data to prove this is the right lever?” If retention is the goal, decide whether push reminders, gamification, or personalised content is the best route. The wrong lever, or tackling a symptom instead of the cause, dooms everything downstream.
    2. Choose the Mechanic
    Once the core problem and goal are clear, lock the solution principle or ‘mechanic’ first. Going with a game layer? Decide if it’s leaderboards, streaks, or badges. Write it down. Then move on. No UI yet. This keeps the focus high-level before diving into pixels.
    3. Wireframe the Flow & Get Focused Feedback
    Now open Figma. Map screens, layout, and transitions. Boxes and arrows are enough. Keep the fidelity low so the discussion stays on the flow, not colour. Crucially, when you share these early wires, ask specific questions and provide clear context to get actionable feedback, not just vague opinions.
    4. Polish the Visuals
    I only let myself tweak grids, type scales, and shadows after the flow is validated. If progress stalls, or before a major polish effort, I surface the work in a design critique — again using targeted questions and clear context — instead of hiding in version 47. This ensures detailing serves the now-validated solution.
    Even for something as small as a single button, running these four checkpoints takes about ten minutes and saves hours of decorative dithering.
    Wrapping Up
    Next time you feel the pull to vanish into mock‑ups before the problem is nailed down, pause and ask what you might be avoiding: maybe the fuzzy core problem, or just the discomfort of asking for tough feedback. Yes, that can expose an uncomfortable truth, but it also gives you the power to face the real issue head-on. It keeps the project focused on solving the right problem, not just perfecting a flawed solution.
    Attention to detail is a superpower when used at the right moment. Obsessing over pixels too soon, though, is a bad habit and a warning light telling us the process needs a rethink.
    SMASHINGMAGAZINE.COM
    Why Designers Get Stuck In The Details And How To Stop
    You’ve drawn fifty versions of the same screen — and you still hate every one of them. Begrudgingly, you pick three, show them to your product manager, and hear: “Looks cool, but the idea doesn’t work.” Sound familiar? In this article, I’ll unpack why designers fall into detail work at the wrong moment, examining both process pitfalls and the underlying psychological reasons, as understanding these traps is the first step to overcoming them. I’ll also share tactics I use to climb out of that trap. Reason #1 You’re Afraid To Show Rough Work We designers worship detail. We’re taught that true craft equals razor‑sharp typography, perfect grids, and pixel precision. So the minute a task arrives, we pop open Figma and start polishing long before polish is needed. I’ve skipped the sketch phase more times than I care to admit. I told myself it would be faster, yet I always ended up spending hours producing a tidy mock‑up when a scribbled thumbnail would have sparked a five‑minute chat with my product manager. Rough sketches felt “unprofessional,” so I hid them. The cost? Lost time, wasted energy — and, by the third redo, teammates were quietly wondering if I even understood the brief. The real problem here is the habit: we open Figma and start perfecting the UI before we’ve even solved the problem. So why do we hide these rough sketches? It’s not just a bad habit or plain silly. There are solid psychological reasons behind it. We often just call it perfectionism, but it’s deeper than wanting things neat. Digging into the psychology (like the research by Hewitt and Flett) shows there are a couple of flavors driving this: Socially prescribed perfectionismIt’s that nagging feeling that everyone else expects perfect work from you, which makes showing anything rough feel like walking into the lion’s den. Self-oriented perfectionismWhere you’re the one setting impossibly high standards for yourself, leading to brutal self-criticism if anything looks slightly off. 
Either way, the result’s the same: showing unfinished work feels wrong, and you miss out on that vital early feedback. Back to the design side, remember that clients rarely see architects’ first pencil sketches, but these sketches still exist; they guide structural choices before the 3D render. Treat your thumbnails the same way — artifacts meant to collapse uncertainty, not portfolio pieces. Once stakeholders see the upside, roughness becomes a badge of speed, not sloppiness. So, the key is to consciously make that shift: Treat early sketches as disposable tools for thinking and actively share them to get feedback faster. Reason #2: You Fix The Symptom, Not The Cause Before tackling any task, we need to understand what business outcome we’re aiming for. Product managers might come to us asking to enlarge the payment button in the shopping cart because users aren’t noticing it. The suggested solution itself isn’t necessarily bad, but before redesigning the button, we should ask, “What data suggests they aren’t noticing it?” Don’t get me wrong, I’m not saying you shouldn’t trust your product manager. On the contrary, these questions help ensure you’re on the same page and working with the same data. From my experience, here are several reasons why users might not be clicking that coveted button: Users don’t understand that this step is for payment. They understand it’s about payment but expect order confirmation first. Due to incorrect translation, users don’t understand what the button means. Lack of trust signals (no security icons, unclear seller information). Unexpected additional costs (hidden fees, shipping) that appear at this stage. Technical issues (inactive button, page freezing). Now, imagine you simply did what the manager suggested. Would you have solved the problem? Hardly. Moreover, the responsibility for the unresolved issue would fall on you, as the interface solution lies within the design domain. 
The product manager actually did their job correctly by identifying a problem: suspiciously few users are clicking the button.

Psychologically, taking on this bigger role isn’t easy. It means overcoming the fear of making mistakes and the discomfort of exploring unclear problems rather than just doing tasks. This shift means seeing ourselves as partners who create value — even if it means fighting a hesitation to question product managers (which might come from a fear of speaking up or a desire to avoid challenging authority) — and understanding that using our product-logic expertise proactively is crucial for modern designers.

There’s another critical reason why we, designers, need to be a bit like product managers: the rise of AI. I deliberately used a simple example about enlarging a button, but I’m confident that in the near future, AI will easily handle routine design tasks. This worries me, but at the same time, I’m already gladly stepping into the product manager’s territory: understanding product and business metrics, formulating hypotheses, conducting research, and so on. It might sound like I’m taking work away from PMs, but believe me, they undoubtedly have enough on their plates and are usually more than happy to delegate some responsibilities to designers.

Reason #3: You’re Solving The Wrong Problem

Before solving anything, ask whether the problem even deserves your attention. During a major home‑screen redesign, our goal was to drive more users into paid services. The initial hypothesis — making service buttons bigger and brighter might help returning users — seemed reasonable enough to test. However, even when A/B tests (a method of comparing two versions of a design to determine which performs better) showed minimal impact, we continued to tweak those buttons. Only later did it click: the home screen isn’t the place to sell; visitors open the app to start, not to buy. We removed that promo block, and nothing broke.
Contextual entry points deeper into the journey performed brilliantly. Lesson learned: without the right context, any visual tweak is lipstick on a pig.

Why did we get stuck polishing buttons instead of stopping sooner? It’s easy to get tunnel vision. Psychologically, it’s likely the good old sunk cost fallacy kicking in: we’d already invested time in the buttons, so stopping felt like wasting that effort, even though the data wasn’t promising. It’s just easier to keep fiddling with something familiar than to admit we need a new plan. Perhaps the simple question I should have asked myself when results stalled was: “Are we optimizing the right thing, or just polishing something that fundamentally doesn’t fit the user’s primary goal here?” That alone might have saved hours.

Reason #4: You’re Drowning In Unactionable Feedback

We all discuss our work with colleagues. But here’s a crucial point: what kind of question do you pose to kick off that discussion? If your go-to is “What do you think?”, that question might lead you down a rabbit hole of personal opinions rather than actionable insights. While experienced colleagues will cut through the noise, others, unsure what to evaluate, might comment on anything and everything — fonts, button colors — even when you desperately need to discuss a user flow.

What matters here are two things: the question you ask and the context you give. That means clearly stating the problem, what you’ve learned, and how your idea aims to fix it. For instance: “The problem is our payment conversion rate has dropped by X%. I’ve interviewed users and found they abandon payment because they don’t understand how the total amount is calculated. My solution is to show a detailed cost breakdown. Do you think this actually solves the problem for them?” Here, you’ve stated the problem (conversion drop), shared your insight (user confusion), explained your solution (cost breakdown), and asked a direct question.
It’s even better if you prepare a list of specific sub-questions. For instance: “Are all items in the cost breakdown clear?” or “Does the placement of this breakdown feel intuitive within the payment flow?”

Another good habit is to keep your rough sketches and previous iterations handy. Some of your colleagues’ suggestions might be things you’ve already tried. It’s great if you can discuss them immediately, to either revisit those ideas or definitively set them aside.

I’m not a psychologist, but experience tells me that, psychologically, the reluctance to be this specific often stems from a fear of our solution being rejected. We tend to internalize feedback: a seemingly innocent comment like “Have you considered other ways to organize this section?” or “Perhaps explore a different structure for this part?” can instantly morph in our minds into “You completely messed up the structure. You’re a bad designer.” Imposter syndrome, in all its glory.

So, to wrap up this point, here are two recommendations:

Prepare for every design discussion. A couple of focused questions will yield far more valuable input than a vague “So, what do you think?”

Actively work on separating feedback on your design from your self-worth. If a mistake is pointed out, acknowledge it, learn from it, and you’ll be less likely to repeat it. This is often easier said than done. For me, it took years of working with a psychotherapist. If you struggle with this, I sincerely wish you strength in overcoming it.

Reason #5: You’re Just Tired

Sometimes, the issue isn’t strategic at all — it’s fatigue. Fussing over icon corners can feel like a cozy bunker when your brain is fried. There’s a name for this: decision fatigue. Basically, your brain’s battery for hard thinking is low, so it hides out in the easy, comfy zone of pixel-pushing.
A striking example comes from a New York Times article titled “Do You Suffer From Decision Fatigue?” It described how judges deciding on release requests were far more likely to grant release early in the day (about 70% of cases) than late in the day (less than 10%), simply because their decision-making energy was depleted. Luckily, designers rarely hold someone’s freedom in their hands, but the example dramatically shows how fatigue can impact our judgment and productivity.

What helps here:

Swap tasks. Trade tickets with another designer; novelty resets your focus.
Talk to another designer. If the NDA permits, ask peers outside the team for a sanity check.
Step away. Even a ten‑minute walk can do more than a double‑shot espresso.

By the way, I came up with these ideas while walking around my office. I was lucky to work near a river, and those short walks quickly turned into a helpful habit.

And one more trick that helps me snap out of detail mode early: if I catch myself making around 20 little tweaks — changing font weight, color, border radius — I just stop. Over time, it turned into a habit. I have a similar one with Instagram: by the third reel, my brain quietly asks, “Wait, weren’t we working?” Funny how that kind of nudge saves a ton of time.

Four Steps I Use To Avoid Drowning In Detail

Knowing these potential traps, here’s the practical process I use to stay on track:

1. Define the Core Problem & Business Goal

Before anything, dig deep: what’s the actual problem we’re solving, not just the requested task or a surface-level symptom? Ask “why” repeatedly. What user pain or business need are we addressing? Then state the clear business goal: “What metric am I moving, and do we have data to prove this is the right lever?” If retention is the goal, decide whether push reminders, gamification, or personalised content is the best route. The wrong lever, or tackling a symptom instead of the cause, dooms everything downstream.

2. Choose the Mechanic (Solution Principle)

Once the core problem and goal are clear, lock in the solution principle, or “mechanic,” first. Going with a game layer? Decide if it’s leaderboards, streaks, or badges. Write it down. Then move on. No UI yet. This keeps the focus high-level before diving into pixels.

3. Wireframe the Flow & Get Focused Feedback

Now open Figma. Map screens, layout, and transitions. Boxes and arrows are enough. Keep the fidelity low so the discussion stays on the flow, not colour. Crucially, when you share these early wires, ask specific questions and provide clear context (as discussed in Reason #4) to get actionable feedback, not just vague opinions.

4. Polish the Visuals (Mindfully)

I only let myself tweak grids, type scales, and shadows after the flow is validated. If progress stalls, or before a major polish effort, I surface the work in a design critique — again using targeted questions and clear context — instead of hiding in version 47. This ensures the detailing serves the now-validated solution.

Even for something as small as a single button, running these four checkpoints takes about ten minutes and saves hours of decorative dithering.

Wrapping Up

Next time you feel the pull to vanish into mock‑ups before the problem is nailed down, pause and ask what you might be avoiding. Yes, that can expose an uncomfortable truth — maybe the fuzzy core problem, or the need to ask for tough feedback — but naming it gives you the power to face the real issue head-on. It keeps the project focused on solving the right problem, not just perfecting a flawed solution. Attention to detail is a superpower when used at the right moment. Obsessing over pixels too soon, though, is a bad habit and a warning light telling us the process needs a rethink.
  • The Last of Us – Season 2: Alex Wang (Production VFX Supervisor) & Fiona Campbell Westgate (Production VFX Producer)

    After detailing the VFX work on The Last of Us Season 1 in 2023, Alex Wang returns to reflect on how the scope and complexity have evolved in Season 2.
    With close to 30 years of experience in the visual effects industry, Fiona Campbell Westgate has contributed to major productions such as Ghost in the Shell, Avatar: The Way of Water, Ant-Man and the Wasp: Quantumania, and Nyad. Her work on Nyad earned her a VES Award for Outstanding Supporting Visual Effects in a Photoreal Feature.
    Collaboration with Craig Mazin and Neil Druckmann is key to shaping the visual universe of The Last of Us. Can you share with us how you work with them and how they influence the visual direction of the series?
    Alex Wang // Craig visualizes the shot or scene before putting words on the page. His writing is always exceptionally detailed and descriptive, ultimately helping us to imagine the shot. Of course, no one understands The Last of Us better than Neil, who knows all aspects of the lore very well. He’s done much research and design work with the Naughty Dog team, so he gives us good guidance regarding creature and environment designs. I always try to begin with concept art to get the ball rolling with Craig and Neil’s ideas. This season, we collaborated with Chromatic Studios for concept art. They also contributed to the games, so I felt that continuity was beneficial for our show.
    Fiona Campbell Westgate // From the outset, it was clear that collaborating with Craig would be an exceptional experience. Early meetings revealed just how personable and invested Craig is. He works closely with every department to ensure that each episode is done to the highest level. Craig places unwavering trust in our VFX Supervisor, Alex Wang. They have an understanding between them that lends to an exceptional partnership. As the VFX Producer, I know how vital the dynamic between the Showrunner and VFX Supervisor is; working with these two has made for one of the best professional experiences of my career. 
    Photograph by Liane Hentscher/HBO
    How has your collaboration with Craig evolved between the first and second seasons? Were there any adjustments in the visual approach or narrative techniques you made this season?
    Alex Wang // Since everything was new in Season 1, we dedicated a lot of time and effort to exploring the show’s visual language, and we all learned a great deal about what worked and what didn’t for the show. In my initial conversations with Craig about Season 2, it was clear that he wanted to expand the show’s scope by utilizing what we established and learned in Season 1. He felt significantly more at ease fully committing to using VFX to help tell the story this season.
    The first season involved multiple VFX studios to handle the complexity of the effects. How did you divide the work among different studios for the second season?
    Alex Wang // Most of the vendors this season were also in Season 1, so we already had a shorthand. The VFX Producer, Fiona Campbell Westgate, and I work closely together to decide how to divide the work among our vendors. The type of work needs to be well-suited for the vendor and fit into our budget and schedule. We were extremely fortunate to have the vendors we did this season. I want to take this opportunity to thank Weta FX, DNEG, RISE, Distillery VFX, Storm Studios, Important Looking Pirates, Blackbird, Wylie Co., RVX, and VDK. We also had ILM for concept art and Digital Domain for previs.
Fiona Campbell Westgate // Alex Wang and I were very aware of the tight delivery schedule, which added to the challenge of distributing the workload. We planned the work based on the individual studios’ capabilities, and tried not to burden them with back-to-back episodes wherever possible. Fortunately, there was shorthand with vendors from Season One, who were well-acquainted with the process and the quality of work the show required.

    The town of Jackson is a key location in The Last of Us. Could you explain how you approached creating and expanding this environment for the second season?
    Alex Wang // Since Season 1, this show has created incredible sets. However, the Jackson town set build is by far the most impressive in terms of scope. They constructed an 822 ft x 400 ft set in Minaty Bay that resembled a real town! I had early discussions with Production Designer Don MacAulay and his team about where they should concentrate their efforts and where VFX would make the most sense to take over. They focused on developing the town’s main street, where we believed most scenes would occur. There is a big reveal of Jackson in the first episode after Ellie comes out of the barn. Distillery VFX was responsible for the town’s extension, which appears seamless because the team took great pride in researching and ensuring the architecture aligned with the set while staying true to the tone of Jackson, Wyoming.
Fiona Campbell Westgate // An impressive set was constructed in Minaty Bay, which served as the foundation for VFX to build upon. There is a beautiful establishing shot of Jackson in Episode 1 that was completed by Distillery, showing a safe and almost normal setting as Season Two starts. Across the episodes, Jackson set extensions were completed by our partners at RISE and Weta. Each had a different phase of Jackson to create, from almost idyllic to a town immersed in battle.
    What challenges did you face filming Jackson on both real and virtual sets? Was there a particular fusion between visual effects and live-action shots to make it feel realistic?
    Alex Wang // I always advocate for building exterior sets outdoors to take advantage of natural light. However, the drawback is that we cannot control the weather and lighting when filming over several days across two units. In Episode 2, there’s supposed to be a winter storm in Jackson, so maintaining consistency within the episode was essential. On sunny and rainy days, we used cranes to lift large 30x60ft screens to block the sun or rain. It was impossible to shield the entire set from the rain or sun, so we prioritized protecting the actors from sunlight or rain. Thus, you can imagine there was extensive weather cleanup for the episode to ensure consistency within the sequences.
Fiona Campbell Westgate // We were fortunate that production built a large-scale Jackson set. It provided a base for the full CG Jackson aerial shots and CG set extensions. The weather conditions at Minaty Bay presented a challenge during the filming of the end of the Battle sequence in Episode 2: periods of bright sunshine alternated with rainfall. In addition to the obvious visual effects work, it became necessary to replace the ground cover.
    Photograph by Liane Hentscher/HBO
    The attack on Jackson by the horde of infected in season 2 is a very intense moment. How did you approach the visual effects for this sequence? What techniques did you use to make the scale of the attack feel as impressive as it did?
    Alex Wang // We knew this would be a very complex sequence to shoot, and for it to be successful, we needed to start planning with the HODs from the very beginning. We began previs during prep with Weta FX and the episode’s director, Mark Mylod. The previs helped us understand Mark and the showrunner’s vision. This then served as a blueprint for all departments to follow, and in many instances, we filmed the previs.
Fiona Campbell Westgate // The sheer size of the CG Infected Horde sets the tone for the scale of the Battle. It’s an intimidating moment when they are revealed through the blowing snow. The addition of CG explosions and atmospheric effects added further scale to the sequence.

    Can you give us an insight into the technical challenges of capturing the infected horde? How much of the effect was done using CGI, and how much was achieved with practical effects?
    Alex Wang // Starting with a detailed previs that Mark and Craig approved was essential for planning the horde. We understood that we would never have enough stunt performers to fill a horde, nor could they carry out some stunts that would be too dangerous. I reviewed the previs with Stunt Coordinator Marny Eng numerous times to decide the best placements for her team’s stunt performers. We also collaborated with Barrie Gower from the Prosthetics team to determine the most effective allocation of his team’s efforts. Stunt performers positioned closest to the camera would receive the full prosthetic treatment, which can take hours.
    Weta FX was responsible for the incredible CG Infected horde work in the Jackson Battle. They have been a creative partner with HBO’s The Last of Us since Season 1, so they were brought on early for Season 2. I began discussions with Weta’s VFX supervisor, Nick Epstein, about how we could tackle these complex horde shots very early during the shoot.
    Typically, repetition in CG crowd scenes can be acceptable, such as armies with soldiers dressed in the same uniform or armour. However, for our Infected horde, Craig wanted to convey that the Infected didn’t come off an assembly line or all shop at the same clothing department store. Any repetition would feel artificial. These Infected were once civilians with families, or they were groups of raiders. We needed complex variations in height, body size, age, clothing, and hair. We built our base library of Infected, and then Nick and the Weta FX team developed a “mix and match” system, allowing the Infected to wear any costume and hair groom. A procedural texturing system was also developed for costumes, providing even greater variation.
    The most crucial aspect of the Infected horde was their motion. We had numerous shots cutting back-to-back with practical Infected, as well as shots where our CG Infected ran right alongside a stunt horde. It was incredibly unforgiving! Weta FX’s animation supervisor from Season 1, Dennis Yoo, returned for Season 2 to meet the challenge. Having been part of the first season, Dennis understood the expectations of Craig and Neil. Similar to issues of model repetition within a horde, it was relatively easy to perceive repetition, especially if they were running toward the same target. It was essential to enhance the details of their performances with nuances such as tripping and falling, getting back up, and trampling over each other. There also needed to be a difference in the Infected’s running speed. To ensure we had enough complexity within the horde, Dennis motion-captured almost 600 unique motion cycles.
We had over a hundred shots in Episode 2 that required the CG Infected horde.
    Fiona Campbell Westgate // Nick Epstein, Weta VFX Supervisor, and Dennis Yoo, Weta Animation Supervisor, were faced with having to add hero, close-up Horde that had to integrate with practical Stunt performers. They achieved this through over 60 motion capture sessions and running it through a deformation system they developed. Every detail was applied to allow for a seamless blend with our practical Stunt performances. The Weta team created a custom costume and hair system that provided individual looks to the CG Infected Horde. We were able to avoid the repetitive look of a CG crowd due to these efforts.

    The movement of the infected horde is crucial for the intensity of the scene. How did you manage the animation and simulation of the infected to ensure smooth and realistic interaction with the environment?
    Fiona Campbell Westgate // We worked closely with the Stunt department to plan out positioning and where VFX would be adding the CG Horde. Craig Mazin wanted the Infected Horde to move in a way that humans cannot. The deformation system kept the body shape anatomically correct and allowed us to push the limits from how a human physically moves. 
    The Bloater makes a terrifying return this season. What were the key challenges in designing and animating this creature? How did you work on the Bloater’s interaction with the environment and other characters?
    Alex Wang // In Season 1, the Kansas City cul-de-sac sequence featured only a handful of Bloater shots. This season, however, nearly forty shots showcase the Bloater in broad daylight during the Battle of Jackson. We needed to redesign the Bloater asset to ensure it looked good in close-up shots from head to toe. Weta FX designed the Bloater for Season 1 and revamped the design for this season. Starting with the Bloater’s silhouette, it had to appear large, intimidating, and menacing. We explored enlarging the cordyceps head shape to make it feel almost like a crown, enhancing the Bloater’s impressive and strong presence.
    During filming, a stunt double stood in for the Bloater. This was mainly for scale reference and composition. It also helped the Infected stunt performers understand the Bloater’s spatial position, allowing them to avoid running through his space. Once we had an edit, Dennis mocapped the Bloater’s performances with his team. It is always challenging to get the motion right for a creature that weighs 600 pounds. We don’t want the mocap to be overly exaggerated, but it does break the character if the Bloater feels too “light.” The brilliant animation team at Weta FX brought the Bloater character to life and nailed it!
    When Tommy goes head-to-head with the Bloater, Craig was quite specific during the prep days about how the Bloater would bubble, melt, and burn as Tommy torches him with the flamethrower. Important Looking Pirates took on the “Burning Bloater” sequence, led by VFX Supervisor Philip Engstrom. They began with extensive R&D to ensure the Bloater’s skin would start to bubble and burn. ILP took the final Bloater asset from Weta FX and had to resculpt and texture the asset for the Bloater’s final burn state. Craig felt it was important for the Bloater to appear maimed at the end. The layers of FX were so complex that the R&D continued almost to the end of the delivery schedule.

Fiona Campbell Westgate // This season, the Bloater had to be bigger and more intimidating. The CG asset was recreated to withstand the scrutiny of close-ups and daylight. Both Craig Mazin and Neil Druckmann worked closely with us during the build. We referenced the game and blended elements of that version with ours. You’ll notice that his head is in the shape of a crown; this is to convey that he’s a powerful force.
    During the Burning Bloater sequence in Episode 2, we brainstormed with Philip Engström, ILP VFX Supervisor, on how this creature would react to the flamethrower and how it would affect the ground as it burns. When the Bloater finally falls to the ground and dies, the extraordinary detail of the embers burning, fluid draining and melting the surrounding snow really sells that the CG creature was in the terrain. 

    Given the Bloater’s imposing size, how did you approach its integration into scenes with the actors? What techniques did you use to create such a realistic and menacing appearance?
    Fiona Campbell Westgate // For the Bloater, a stunt performer wearing a motion capture suit was filmed on set. This provided interaction with the actors and the environment. VFX enhanced the intensity of his movements, incorporating simulations to the CG Bloater’s skin and muscles that would reflect the weight and force as this terrifying creature moves. 

    Seattle in The Last of Us is a completely devastated city. Can you talk about how you recreated this destruction? What were the most difficult visual aspects to realize for this post-apocalyptic city?
    Fiona Campbell Westgate // We were meticulous in blending the CG destruction with the practical environment. The flora’s ability to overtake the environment had to be believable, and we adhered to the principle of form follows function. Due to the vastness of the CG devastation it was crucial to avoid repetitive effects. Consequently, our vendors were tasked with creating bespoke designs that evoked a sense of awe and beauty.
    Was Seattle’s architecture a key element in how you designed the visual effects? How did you adapt the city’s real-life urban landscape to meet the needs of the story while maintaining a coherent aesthetic?
Alex Wang // It’s always important to Craig and Neil that we remain true to the cities our characters are in. DNEG was one of our primary vendors for Boston in Season 1, so it was natural for them to return for Season 2, this time focusing on Seattle. DNEG’s VFX Supervisor, Stephen James, who played a crucial role in developing the visual language of Boston for Season 1, also returns for this season. Stephen and Melaina Mace led a team to Seattle to shoot plates and perform lidar scans of parts of the city. We identified the buildings unique to Seattle that would have existed in 2003, so we ensured these buildings were always included in our establishing shots.
Overgrowth and destruction have significantly influenced the environments in The Last of Us. The environment functions almost as a character in both Season 1 and Season 2. In the last season, the building destruction in Boston was primarily caused by military bombings. This season, the destruction mainly arises from dilapidation. Living in the Pacific Northwest, I understand how damp it can get for most of the year. I imagined that, over 20 years, the integrity of the buildings would be compromised by natural forces. This abundant moisture creates an exceptionally lush and vibrant landscape for much of the year. Therefore, when designing Seattle, we ensured that the destruction and overgrowth appeared intentional and aesthetically distinct from those of Boston.
    Fiona Campbell Westgate // Led by Stephen James, DNEG VFX Supervisor, and Melaina Mace, DNEG DFX Supervisor, the team captured photography, drone footage and the Clear Angle team captured LiDAR data over a three-day period in Seattle. It was crucial to include recognizable Seattle landmarks that would resonate with people familiar with the game. 

    The devastated city almost becomes a character in itself this season. What aspects of the visual effects did you have to enhance to increase the immersion of the viewer into this hostile and deteriorated environment?
    Fiona Campbell Westgate // It is indeed a character. Craig wanted it to be deteriorated but to have moments where it’s also beautiful in its devastation. For instance, in the Music Store in Episode 4 where Ellie is playing guitar for Dina, the deteriorated interior provides a beautiful backdrop to this intimate moment. The Set Decorating team dressed a specific section of the set, while VFX extended the destruction and overgrowth to encompass the entire environment, immersing the viewer in strange yet familiar surroundings.
    Photograph by Liane Hentscher/HBO
    The sequence where Ellie navigates a boat through a violent storm is stunning. What were the key challenges in creating this scene, especially with water simulation and the storm’s effects?
    Alex Wang // In the concluding episode of Season 2, Ellie is deep in Seattle, searching for Abby. The episode draws us closer to the Aquarium, where this area of Seattle is heavily flooded. Naturally, this brings challenges with CG water. In the scene where Ellie encounters Isaac and the W.L.F soldiers by the dock, we had a complex shoot involving multiple locations, including a water tank and a boat gimbal. There were also several full CG shots. For Isaac’s riverine boat, which was in a stormy ocean, I felt it was essential that the boat and the actors were given the appropriate motion. Weta FX assisted with tech-vis for all the boat gimbal work. We began with different ocean wave sizes caused by the storm, and once the filmmakers selected one, the boat’s motion in the tech-vis fed the special FX gimbal.
    When Ellie gets into the Jon boat, I didn’t want it on the same gimbal because I felt it would be too mechanical. Ellie’s weight needed to affect the boat as she got in, and that wouldn’t have happened with a mechanical gimbal. So, we opted to have her boat in a water tank for this scene. Special FX had wave makers that provided the boat with the appropriate movement.
Instead of guessing what the ocean sim for the riverine boat should be, the tech-vis data enabled DNEG to get a head start on the water simulations in post-production. Craig wanted this sequence to appear convincingly dark, much like it looks out on the ocean at night. This allowed us to create dramatic visuals, using lightning strikes at moments to reveal depth.
    Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint?
    Alex Wang // The Last of Us tells the story of our characters’ journey. If you look at how season 2 begins in Jackson, it differs significantly from how we conclude the season in Seattle. We seldom return to the exact location in each episode, meaning every episode presents a unique challenge. The scope of work this season has been incredibly rewarding. We burned a Bloater, and we also introduced spores this season!
    Photograph by Liane Hentscher/HBO
    Looking back on the project, what aspects of the visual effects are you most proud of?
    Alex Wang // The Jackson Battle was incredibly complex, involving a grueling and lengthy shoot in quite challenging conditions, along with over 600 VFX shots in episode 2. It was truly inspiring to witness the determination of every department and vendor to give their all and create something remarkable.
    Fiona Campbell Westgate // I am immensely proud of the exceptional work accomplished by all of our vendors. During the VFX reviews, I found myself clapping with delight when the final shots were displayed; it was exciting to see remarkable results of the artists’ efforts come to light. 
    How long have you worked on this show?
    Alex Wang // I’ve been on this season for nearly two years.
    Fiona Campbell Westgate // A little over one year; I joined the show in April 2024.
    What’s the VFX shots count?
    Alex Wang // We had just over 2,500 shots this season.
    Fiona Campbell Westgate // In Season 2, there were a total of 2656 visual effects shots.
    What is your next project?
    Fiona Campbell Westgate // Stay tuned…
    A big thanks for your time.
    WANT TO KNOW MORE?
    Blackbird: Dedicated page about The Last of Us – Season 2 on Blackbird website.
    DNEG: Dedicated page about The Last of Us – Season 2 on DNEG website.
    Important Looking Pirates: Dedicated page about The Last of Us – Season 2 on Important Looking Pirates website.
    RISE: Dedicated page about The Last of Us – Season 2 on RISE website.
    Weta FX: Dedicated page about The Last of Us – Season 2 on Weta FX website.
    © Vincent Frei – The Art of VFX – 2025
    #last #season #alex #wang #production
    The Last of Us – Season 2: Alex Wang (Production VFX Supervisor) & Fiona Campbell Westgate (Production VFX Producer)
    After detailing the VFX work on The Last of Us Season 1 in 2023, Alex Wang returns to reflect on how the scope and complexity have evolved in Season 2. With close to 30 years of experience in the visual effects industry, Fiona Campbell Westgate has contributed to major productions such as Ghost in the Shell, Avatar: The Way of Water, Ant-Man and the Wasp: Quantumania, and Nyad. Her work on Nyad earned her a VES Award for Outstanding Supporting Visual Effects in a Photoreal Feature. Collaboration with Craig Mazin and Neil Druckmann is key to shaping the visual universe of The Last of Us. Can you share with us how you work with them and how they influence the visual direction of the series? Alex Wang // Craig visualizes the shot or scene before putting words on the page. His writing is always exceptionally detailed and descriptive, ultimately helping us to imagine the shot. Of course, no one understands The Last of Us better than Neil, who knows all aspects of the lore very well. He’s done much research and design work with the Naughty Dog team, so he gives us good guidance regarding creature and environment designs. I always try to begin with concept art to get the ball rolling with Craig and Neil’s ideas. This season, we collaborated with Chromatic Studios for concept art. They also contributed to the games, so I felt that continuity was beneficial for our show. Fiona Campbell Westgate // From the outset, it was clear that collaborating with Craig would be an exceptional experience. Early meetings revealed just how personable and invested Craig is. He works closely with every department to ensure that each episode is done to the highest level. Craig places unwavering trust in our VFX Supervisor, Alex Wang. They have an understanding between them that lends to an exceptional partnership. 
As the VFX Producer, I know how vital the dynamic between the Showrunner and VFX Supervisor is; working with these two has made for one of the best professional experiences of my career.  Photograph by Liane Hentscher/HBO How has your collaboration with Craig evolved between the first and second seasons? Were there any adjustments in the visual approach or narrative techniques you made this season? Alex Wang // Since everything was new in Season 1, we dedicated a lot of time and effort to exploring the show’s visual language, and we all learned a great deal about what worked and what didn’t for the show. In my initial conversations with Craig about Season 2, it was clear that he wanted to expand the show’s scope by utilizing what we established and learned in Season 1. He felt significantly more at ease fully committing to using VFX to help tell the story this season. The first season involved multiple VFX studios to handle the complexity of the effects. How did you divide the work among different studios for the second season? Alex Wang // Most of the vendors this season were also in Season 1, so we already had a shorthand. The VFX Producer, Fiona Campbell Westgate, and I work closely together to decide how to divide the work among our vendors. The type of work needs to be well-suited for the vendor and fit into our budget and schedule. We were extremely fortunate to have the vendors we did this season. I want to take this opportunity to thank Weta FX, DNEG, RISE, Distillery VFX, Storm Studios, Important Looking Pirates, Blackbird, Wylie Co., RVX, and VDK. We also had ILM for concept art and Digital Domain for previs. Fiona Campbell Westgate // Alex Wang and I were very aware of the tight delivery schedule, which added to the challenge of distributing the workload. We planned the work based on the individual studio’s capabilities, and tried not to burden them with back to back episodes wherever possible. 
Fortunately, there was shorthand with vendors from Season One, who were well-acquainted with the process and the quality of work the show required. The town of Jackson is a key location in The Last of Us. Could you explain how you approached creating and expanding this environment for the second season? Alex Wang // Since Season 1, this show has created incredible sets. However, the Jackson town set build is by far the most impressive in terms of scope. They constructed an 822 ft x 400 ft set in Minaty Bay that resembled a real town! I had early discussions with Production Designer Don MacAulay and his team about where they should concentrate their efforts and where VFX would make the most sense to take over. They focused on developing the town’s main street, where we believed most scenes would occur. There is a big reveal of Jackson in the first episode after Ellie comes out of the barn. Distillery VFX was responsible for the town’s extension, which appears seamless because the team took great pride in researching and ensuring the architecture aligned with the set while staying true to the tone of Jackson, Wyoming. Fiona Campbell Westgate // An impressive set was constructed in Minaty Bay, which served as the foundation for VFX to build upon. There is a beautiful establishing shot of Jackson in Episode 1 that was completed by Distillery, showing a safe and almost normal setting as Season Two starts. Across the episodes, Jackson set extensions were completed by our partners at RISE and Weta. Each had a different phase of Jackson to create, from almost idyllic to a town immersed in Battle.  What challenges did you face filming Jackson on both real and virtual sets? Was there a particular fusion between visual effects and live-action shots to make it feel realistic? Alex Wang // I always advocate for building exterior sets outdoors to take advantage of natural light. 
However, the drawback is that we cannot control the weather and lighting when filming over several days across two units. In Episode 2, there’s supposed to be a winter storm in Jackson, so maintaining consistency within the episode was essential. On sunny and rainy days, we used cranes to lift large 30x60ft screens to block the sun or rain. It was impossible to shield the entire set from the rain or sun, so we prioritized protecting the actors from sunlight or rain. Thus, you can imagine there was extensive weather cleanup for the episode to ensure consistency within the sequences. Fiona Campbell Westgate // We were fortunate that production built a large scale Jackson set. It provided a base for the full CG Jackson aerial shots and CG Set Extensions. The weather conditions at Minaty Bay presented a challenge during the filming of the end of the Battle sequence in Episode 2. While there were periods of bright sunshine, rainfall occurred during the filming of the end of the Battle sequence in Episode 2. In addition to the obvious visual effects work, it became necessary to replace the ground cover. Photograph by Liane Hentscher/HBO The attack on Jackson by the horde of infected in season 2 is a very intense moment. How did you approach the visual effects for this sequence? What techniques did you use to make the scale of the attack feel as impressive as it did? Alex Wang // We knew this would be a very complex sequence to shoot, and for it to be successful, we needed to start planning with the HODs from the very beginning. We began previs during prep with Weta FX and the episode’s director, Mark Mylod. The previs helped us understand Mark and the showrunner’s vision. This then served as a blueprint for all departments to follow, and in many instances, we filmed the previs. Fiona Campbell Westgate // The sheer size of the CG Infected Horde sets the tone for the scale of the Battle. It’s an intimidating moment when they are revealed through the blowing snow. 
The addition of CG explosions and atmospheric effects contributed in adding scale to the sequence.  Can you give us an insight into the technical challenges of capturing the infected horde? How much of the effect was done using CGI, and how much was achieved with practical effects? Alex Wang // Starting with a detailed previs that Mark and Craig approved was essential for planning the horde. We understood that we would never have enough stunt performers to fill a horde, nor could they carry out some stunts that would be too dangerous. I reviewed the previs with Stunt Coordinator Marny Eng numerous times to decide the best placements for her team’s stunt performers. We also collaborated with Barrie Gower from the Prosthetics team to determine the most effective allocation of his team’s efforts. Stunt performers positioned closest to the camera would receive the full prosthetic treatment, which can take hours. Weta FX was responsible for the incredible CG Infected horde work in the Jackson Battle. They have been a creative partner with HBO’s The Last of Us since Season 1, so they were brought on early for Season 2. I began discussions with Weta’s VFX supervisor, Nick Epstein, about how we could tackle these complex horde shots very early during the shoot. Typically, repetition in CG crowd scenes can be acceptable, such as armies with soldiers dressed in the same uniform or armour. However, for our Infected horde, Craig wanted to convey that the Infected didn’t come off an assembly line or all shop at the same clothing department store. Any repetition would feel artificial. These Infected were once civilians with families, or they were groups of raiders. We needed complex variations in height, body size, age, clothing, and hair. We built our base library of Infected, and then Nick and the Weta FX team developed a “mix and match” system, allowing the Infected to wear any costume and hair groom. 
A procedural texturing system was also developed for costumes, providing even greater variation. The most crucial aspect of the Infected horde was their motion. We had numerous shots cutting back-to-back with practical Infected, as well as shots where our CG Infected ran right alongside a stunt horde. It was incredibly unforgiving! Weta FX’s animation supervisor from Season 1, Dennis Yoo, returned for Season 2 to meet the challenge. Having been part of the first season, Dennis understood the expectations of Craig and Neil. Similar to issues of model repetition within a horde, it was relatively easy to perceive repetition, especially if they were running toward the same target. It was essential to enhance the details of their performances with nuances such as tripping and falling, getting back up, and trampling over each other. There also needed to be a difference in the Infected’s running speed. To ensure we had enough complexity within the horde, Dennis motion-captured almost 600 unique motion cycles. We had over a hundred shots in episode 2 that required CG Infected horde. Fiona Campbell Westgate // Nick Epstein, Weta VFX Supervisor, and Dennis Yoo, Weta Animation Supervisor, were faced with having to add hero, close-up Horde that had to integrate with practical Stunt performers. They achieved this through over 60 motion capture sessions and running it through a deformation system they developed. Every detail was applied to allow for a seamless blend with our practical Stunt performances. The Weta team created a custom costume and hair system that provided individual looks to the CG Infected Horde. We were able to avoid the repetitive look of a CG crowd due to these efforts. The movement of the infected horde is crucial for the intensity of the scene. How did you manage the animation and simulation of the infected to ensure smooth and realistic interaction with the environment? 
Fiona Campbell Westgate // We worked closely with the Stunt department to plan out positioning and where VFX would be adding the CG Horde. Craig Mazin wanted the Infected Horde to move in a way that humans cannot. The deformation system kept the body shape anatomically correct and allowed us to push the limits from how a human physically moves.  The Bloater makes a terrifying return this season. What were the key challenges in designing and animating this creature? How did you work on the Bloater’s interaction with the environment and other characters? Alex Wang // In Season 1, the Kansas City cul-de-sac sequence featured only a handful of Bloater shots. This season, however, nearly forty shots showcase the Bloater in broad daylight during the Battle of Jackson. We needed to redesign the Bloater asset to ensure it looked good in close-up shots from head to toe. Weta FX designed the Bloater for Season 1 and revamped the design for this season. Starting with the Bloater’s silhouette, it had to appear large, intimidating, and menacing. We explored enlarging the cordyceps head shape to make it feel almost like a crown, enhancing the Bloater’s impressive and strong presence. During filming, a stunt double stood in for the Bloater. This was mainly for scale reference and composition. It also helped the Infected stunt performers understand the Bloater’s spatial position, allowing them to avoid running through his space. Once we had an edit, Dennis mocapped the Bloater’s performances with his team. It is always challenging to get the motion right for a creature that weighs 600 pounds. We don’t want the mocap to be overly exaggerated, but it does break the character if the Bloater feels too “light.” The brilliant animation team at Weta FX brought the Bloater character to life and nailed it! 
When Tommy goes head-to-head with the Bloater, Craig was quite specific during the prep days about how the Bloater would bubble, melt, and burn as Tommy torches him with the flamethrower. Important Looking Pirates took on the “Burning Bloater” sequence, led by VFX Supervisor Philip Engstrom. They began with extensive R&D to ensure the Bloater’s skin would start to bubble and burn. ILP took the final Bloater asset from Weta FX and had to resculpt and texture the asset for the Bloater’s final burn state. Craig felt it was important for the Bloater to appear maimed at the end. The layers of FX were so complex that the R&D continued almost to the end of the delivery schedule. Fiona Campbell Westgate // This season the Bloater had to be bigger, more intimidating. The CG Asset was recreated to withstand the scrutiny of close ups and in daylight. Both Craig Mazin and Neil Druckmann worked closely with us during the process of the build. We referenced the game and applied elements of that version with ours. You’ll notice that his head is in the shape of crown, this is to convey he’s a powerful force.  During the Burning Bloater sequence in Episode 2, we brainstormed with Philip Engström, ILP VFX Supervisor, on how this creature would react to the flamethrower and how it would affect the ground as it burns. When the Bloater finally falls to the ground and dies, the extraordinary detail of the embers burning, fluid draining and melting the surrounding snow really sells that the CG creature was in the terrain.  Given the Bloater’s imposing size, how did you approach its integration into scenes with the actors? What techniques did you use to create such a realistic and menacing appearance? Fiona Campbell Westgate // For the Bloater, a stunt performer wearing a motion capture suit was filmed on set. This provided interaction with the actors and the environment. 
VFX enhanced the intensity of his movements, incorporating simulations to the CG Bloater’s skin and muscles that would reflect the weight and force as this terrifying creature moves.  Seattle in The Last of Us is a completely devastated city. Can you talk about how you recreated this destruction? What were the most difficult visual aspects to realize for this post-apocalyptic city? Fiona Campbell Westgate // We were meticulous in blending the CG destruction with the practical environment. The flora’s ability to overtake the environment had to be believable, and we adhered to the principle of form follows function. Due to the vastness of the CG devastation it was crucial to avoid repetitive effects. Consequently, our vendors were tasked with creating bespoke designs that evoked a sense of awe and beauty. Was Seattle’s architecture a key element in how you designed the visual effects? How did you adapt the city’s real-life urban landscape to meet the needs of the story while maintaining a coherent aesthetic? Alex Wang // It’s always important to Craig and Neil that we remain true to the cities our characters are in. DNEG was one of our primary vendors for Boston in Season 1, so it was natural for them to return for Season 2, this time focusing on Seattle. DNEG’s VFX Supervisor, Stephen James, who played a crucial role in developing the visual language of Boston for Season 1, also returns for this season. Stephen and Melaina Maceled a team to Seattle to shoot plates and perform lidar scans of parts of the city. We identified the buildings unique to Seattle that would have existed in 2003, so we ensured these buildings were always included in our establishing shots. Overgrowth and destruction have significantly influenced the environments in The Last of Us. The environment functions almost as a character in both Season 1 and Season 2. In the last season, the building destruction in Boston was primarily caused by military bombings. 
During this season, destruction mainly arises from dilapidation. Living in the Pacific Northwest, I understand how damp it can get for most of the year. I imagined that, over 20 years, the integrity of the buildings would be compromised by natural forces. This abundant moisture creates an exceptionally lush and vibrant landscape for much of the year. Therefore, when designing Seattle, we ensured that the destruction and overgrowth appeared intentional and aesthetically distinct from those of Boston. Fiona Campbell Westgate // Led by Stephen James, DNEG VFX Supervisor, and Melaina Mace, DNEG DFX Supervisor, the team captured photography, drone footage and the Clear Angle team captured LiDAR data over a three-day period in Seattle. It was crucial to include recognizable Seattle landmarks that would resonate with people familiar with the game.  The devastated city almost becomes a character in itself this season. What aspects of the visual effects did you have to enhance to increase the immersion of the viewer into this hostile and deteriorated environment? Fiona Campbell Westgate // It is indeed a character. Craig wanted it to be deteriorated but to have moments where it’s also beautiful in its devastation. For instance, in the Music Store in Episode 4 where Ellie is playing guitar for Dina, the deteriorated interior provides a beautiful backdrop to this intimate moment. The Set Decorating team dressed a specific section of the set, while VFX extended the destruction and overgrowth to encompass the entire environment, immersing the viewer in strange yet familiar surroundings. Photograph by Liane Hentscher/HBO The sequence where Ellie navigates a boat through a violent storm is stunning. What were the key challenges in creating this scene, especially with water simulation and the storm’s effects? Alex Wang // In the concluding episode of Season 2, Ellie is deep in Seattle, searching for Abby. 
The episode draws us closer to the Aquarium, where this area of Seattle is heavily flooded. Naturally, this brings challenges with CG water. In the scene where Ellie encounters Isaac and the W.L.F soldiers by the dock, we had a complex shoot involving multiple locations, including a water tank and a boat gimbal. There were also several full CG shots. For Isaac’s riverine boat, which was in a stormy ocean, I felt it was essential that the boat and the actors were given the appropriate motion. Weta FX assisted with tech-vis for all the boat gimbal work. We began with different ocean wave sizes caused by the storm, and once the filmmakers selected one, the boat’s motion in the tech-vis fed the special FX gimbal. When Ellie gets into the Jon boat, I didn’t want it on the same gimbal because I felt it would be too mechanical. Ellie’s weight needed to affect the boat as she got in, and that wouldn’t have happened with a mechanical gimbal. So, we opted to have her boat in a water tank for this scene. Special FX had wave makers that provided the boat with the appropriate movement. Instead of guessing what the ocean sim for the riverine boat should be, the tech- vis data enabled DNEG to get a head start on the water simulations in post-production. Craig wanted this sequence to appear convincingly dark, much like it looks out on the ocean at night. This allowed us to create dramatic visuals, using lightning strikes at moments to reveal depth. Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint? Alex Wang // The Last of Us tells the story of our characters’ journey. If you look at how season 2 begins in Jackson, it differs significantly from how we conclude the season in Seattle. We seldom return to the exact location in each episode, meaning every episode presents a unique challenge. The scope of work this season has been incredibly rewarding. 
We burned a Bloater, and we also introduced spores this season! Photograph by Liane Hentscher/HBO Looking back on the project, what aspects of the visual effects are you most proud of? Alex Wang // The Jackson Battle was incredibly complex, involving a grueling and lengthy shoot in quite challenging conditions, along with over 600 VFX shots in episode 2. It was truly inspiring to witness the determination of every department and vendor to give their all and create something remarkable. Fiona Campbell Westgate // I am immensely proud of the exceptional work accomplished by all of our vendors. During the VFX reviews, I found myself clapping with delight when the final shots were displayed; it was exciting to see remarkable results of the artists’ efforts come to light.  How long have you worked on this show? Alex Wang // I’ve been on this season for nearly two years. Fiona Campbell Westgate // A little over one year; I joined the show in April 2024. What’s the VFX shots count? Alex Wang // We had just over 2,500 shots this Season. Fiona Campbell Westgate // In Season 2, there were a total of 2656 visual effects shots. What is your next project? Fiona Campbell Westgate // Stay tuned… A big thanks for your time. WANT TO KNOW MORE?Blackbird: Dedicated page about The Last of Us – Season 2 website.DNEG: Dedicated page about The Last of Us – Season 2 on DNEG website.Important Looking Pirates: Dedicated page about The Last of Us – Season 2 website.RISE: Dedicated page about The Last of Us – Season 2 website.Weta FX: Dedicated page about The Last of Us – Season 2 website. © Vincent Frei – The Art of VFX – 2025 #last #season #alex #wang #production
    WWW.ARTOFVFX.COM
    The Last of Us – Season 2: Alex Wang (Production VFX Supervisor) & Fiona Campbell Westgate (Production VFX Producer)
    After detailing the VFX work on The Last of Us Season 1 in 2023, Alex Wang returns to reflect on how the scope and complexity have evolved in Season 2. With close to 30 years of experience in the visual effects industry, Fiona Campbell Westgate has contributed to major productions such as Ghost in the Shell, Avatar: The Way of Water, Ant-Man and the Wasp: Quantumania, and Nyad. Her work on Nyad earned her a VES Award for Outstanding Supporting Visual Effects in a Photoreal Feature. Collaboration with Craig Mazin and Neil Druckmann is key to shaping the visual universe of The Last of Us. Can you share with us how you work with them and how they influence the visual direction of the series? Alex Wang // Craig visualizes the shot or scene before putting words on the page. His writing is always exceptionally detailed and descriptive, ultimately helping us to imagine the shot. Of course, no one understands The Last of Us better than Neil, who knows all aspects of the lore very well. He’s done much research and design work with the Naughty Dog team, so he gives us good guidance regarding creature and environment designs. I always try to begin with concept art to get the ball rolling with Craig and Neil’s ideas. This season, we collaborated with Chromatic Studios for concept art. They also contributed to the games, so I felt that continuity was beneficial for our show. Fiona Campbell Westgate // From the outset, it was clear that collaborating with Craig would be an exceptional experience. Early meetings revealed just how personable and invested Craig is. He works closely with every department to ensure that each episode is done to the highest level. Craig places unwavering trust in our VFX Supervisor, Alex Wang. They have an understanding between them that lends to an exceptional partnership. 
As the VFX Producer, I know how vital the dynamic between the Showrunner and VFX Supervisor is; working with these two has made for one of the best professional experiences of my career.  Photograph by Liane Hentscher/HBO How has your collaboration with Craig evolved between the first and second seasons? Were there any adjustments in the visual approach or narrative techniques you made this season? Alex Wang // Since everything was new in Season 1, we dedicated a lot of time and effort to exploring the show’s visual language, and we all learned a great deal about what worked and what didn’t for the show. In my initial conversations with Craig about Season 2, it was clear that he wanted to expand the show’s scope by utilizing what we established and learned in Season 1. He felt significantly more at ease fully committing to using VFX to help tell the story this season. The first season involved multiple VFX studios to handle the complexity of the effects. How did you divide the work among different studios for the second season? Alex Wang // Most of the vendors this season were also in Season 1, so we already had a shorthand. The VFX Producer, Fiona Campbell Westgate, and I work closely together to decide how to divide the work among our vendors. The type of work needs to be well-suited for the vendor and fit into our budget and schedule. We were extremely fortunate to have the vendors we did this season. I want to take this opportunity to thank Weta FX, DNEG, RISE, Distillery VFX, Storm Studios, Important Looking Pirates, Blackbird, Wylie Co., RVX, and VDK. We also had ILM for concept art and Digital Domain for previs. Fiona Campbell Westgate // Alex Wang and I were very aware of the tight delivery schedule, which added to the challenge of distributing the workload. We planned the work based on the individual studio’s capabilities, and tried not to burden them with back to back episodes wherever possible. 
Fortunately, there was shorthand with vendors from Season One, who were well-acquainted with the process and the quality of work the show required. The town of Jackson is a key location in The Last of Us. Could you explain how you approached creating and expanding this environment for the second season? Alex Wang // Since Season 1, this show has created incredible sets. However, the Jackson town set build is by far the most impressive in terms of scope. They constructed an 822 ft x 400 ft set in Minaty Bay that resembled a real town! I had early discussions with Production Designer Don MacAulay and his team about where they should concentrate their efforts and where VFX would make the most sense to take over. They focused on developing the town’s main street, where we believed most scenes would occur. There is a big reveal of Jackson in the first episode after Ellie comes out of the barn. Distillery VFX was responsible for the town’s extension, which appears seamless because the team took great pride in researching and ensuring the architecture aligned with the set while staying true to the tone of Jackson, Wyoming. Fiona Campbell Westgate // An impressive set was constructed in Minaty Bay, which served as the foundation for VFX to build upon. There is a beautiful establishing shot of Jackson in Episode 1 that was completed by Distillery, showing a safe and almost normal setting as Season Two starts. Across the episodes, Jackson set extensions were completed by our partners at RISE and Weta. Each had a different phase of Jackson to create, from almost idyllic to a town immersed in Battle.  What challenges did you face filming Jackson on both real and virtual sets? Was there a particular fusion between visual effects and live-action shots to make it feel realistic? Alex Wang // I always advocate for building exterior sets outdoors to take advantage of natural light. 
However, the drawback is that we cannot control the weather and lighting when filming over several days across two units. In Episode 2, there’s supposed to be a winter storm in Jackson, so maintaining consistency within the episode was essential. On sunny and rainy days, we used cranes to lift large 30x60ft screens to block the sun or rain. It was impossible to shield the entire set from the rain or sun, so we prioritized protecting the actors from sunlight or rain. Thus, you can imagine there was extensive weather cleanup for the episode to ensure consistency within the sequences. Fiona Campbell Westgate // We were fortunate that production built a large scale Jackson set. It provided a base for the full CG Jackson aerial shots and CG Set Extensions. The weather conditions at Minaty Bay presented a challenge during the filming of the end of the Battle sequence in Episode 2. While there were periods of bright sunshine, rainfall occurred during the filming of the end of the Battle sequence in Episode 2. In addition to the obvious visual effects work, it became necessary to replace the ground cover. Photograph by Liane Hentscher/HBO The attack on Jackson by the horde of infected in season 2 is a very intense moment. How did you approach the visual effects for this sequence? What techniques did you use to make the scale of the attack feel as impressive as it did? Alex Wang // We knew this would be a very complex sequence to shoot, and for it to be successful, we needed to start planning with the HODs from the very beginning. We began previs during prep with Weta FX and the episode’s director, Mark Mylod. The previs helped us understand Mark and the showrunner’s vision. This then served as a blueprint for all departments to follow, and in many instances, we filmed the previs. Fiona Campbell Westgate // The sheer size of the CG Infected Horde sets the tone for the scale of the Battle. It’s an intimidating moment when they are revealed through the blowing snow. 
The addition of CG explosions and atmospheric effects contributed in adding scale to the sequence.  Can you give us an insight into the technical challenges of capturing the infected horde? How much of the effect was done using CGI, and how much was achieved with practical effects? Alex Wang // Starting with a detailed previs that Mark and Craig approved was essential for planning the horde. We understood that we would never have enough stunt performers to fill a horde, nor could they carry out some stunts that would be too dangerous. I reviewed the previs with Stunt Coordinator Marny Eng numerous times to decide the best placements for her team’s stunt performers. We also collaborated with Barrie Gower from the Prosthetics team to determine the most effective allocation of his team’s efforts. Stunt performers positioned closest to the camera would receive the full prosthetic treatment, which can take hours. Weta FX was responsible for the incredible CG Infected horde work in the Jackson Battle. They have been a creative partner with HBO’s The Last of Us since Season 1, so they were brought on early for Season 2. I began discussions with Weta’s VFX supervisor, Nick Epstein, about how we could tackle these complex horde shots very early during the shoot. Typically, repetition in CG crowd scenes can be acceptable, such as armies with soldiers dressed in the same uniform or armour. However, for our Infected horde, Craig wanted to convey that the Infected didn’t come off an assembly line or all shop at the same clothing department store. Any repetition would feel artificial. These Infected were once civilians with families, or they were groups of raiders. We needed complex variations in height, body size, age, clothing, and hair. We built our base library of Infected, and then Nick and the Weta FX team developed a “mix and match” system, allowing the Infected to wear any costume and hair groom. 
A procedural texturing system was also developed for costumes, providing even greater variation.

The most crucial aspect of the Infected horde was their motion. We had numerous shots cutting back-to-back with practical Infected, as well as shots where our CG Infected ran right alongside a stunt horde. It was incredibly unforgiving! Weta FX’s animation supervisor from Season 1, Dennis Yoo, returned for Season 2 to meet the challenge. Having been part of the first season, Dennis understood the expectations of Craig and Neil. As with model repetition within a horde, it was relatively easy to perceive repetition in motion, especially if the Infected were all running toward the same target. It was essential to enhance the details of their performances with nuances such as tripping and falling, getting back up, and trampling over each other. There also needed to be differences in the Infected’s running speed. To ensure we had enough complexity within the horde, Dennis motion-captured almost 600 unique motion cycles. We had over a hundred shots in Episode 2 that required the CG Infected horde.

Fiona Campbell Westgate // Nick Epstein, Weta VFX Supervisor, and Dennis Yoo, Weta Animation Supervisor, had to add hero, close-up horde that integrated with practical stunt performers. They achieved this through over 60 motion capture sessions, run through a deformation system they developed. Every detail was applied to allow for a seamless blend with our practical stunt performances. The Weta team created a custom costume and hair system that gave individual looks to the CG Infected horde. Thanks to these efforts, we were able to avoid the repetitive look of a CG crowd.

The movement of the infected horde is crucial for the intensity of the scene. How did you manage the animation and simulation of the infected to ensure smooth and realistic interaction with the environment?
Fiona Campbell Westgate // We worked closely with the Stunt department to plan out positioning and where VFX would be adding the CG horde. Craig Mazin wanted the Infected horde to move in a way that humans cannot. The deformation system kept the body shape anatomically correct while allowing us to push the limits of how a human physically moves.

The Bloater makes a terrifying return this season. What were the key challenges in designing and animating this creature? How did you work on the Bloater’s interaction with the environment and other characters?

Alex Wang // In Season 1, the Kansas City cul-de-sac sequence featured only a handful of Bloater shots. This season, however, nearly forty shots showcase the Bloater in broad daylight during the Battle of Jackson. We needed to redesign the Bloater asset to ensure it looked good in close-up shots from head to toe. Weta FX designed the Bloater for Season 1 and revamped the design for this season. Starting with the Bloater’s silhouette, it had to appear large, intimidating, and menacing. We explored enlarging the cordyceps head shape to make it feel almost like a crown, enhancing the Bloater’s impressive and strong presence.

During filming, a stunt double stood in for the Bloater, mainly for scale reference and composition. It also helped the Infected stunt performers understand the Bloater’s spatial position, allowing them to avoid running through his space. Once we had an edit, Dennis mocapped the Bloater’s performances with his team. It is always challenging to get the motion right for a creature that weighs 600 pounds. We don’t want the mocap to be overly exaggerated, but it does break the character if the Bloater feels too “light.” The brilliant animation team at Weta FX brought the Bloater character to life and nailed it!
When Tommy goes head-to-head with the Bloater, Craig was quite specific during prep about how the Bloater would bubble, melt, and burn as Tommy torches him with the flamethrower. Important Looking Pirates took on the “Burning Bloater” sequence, led by VFX Supervisor Philip Engström. They began with extensive R&D to ensure the Bloater’s skin would start to bubble and burn. ILP took the final Bloater asset from Weta FX and had to resculpt and retexture it for the Bloater’s final burn state. Craig felt it was important for the Bloater to appear maimed at the end. The layers of FX were so complex that the R&D continued almost to the end of the delivery schedule.

Fiona Campbell Westgate // This season the Bloater had to be bigger and more intimidating. The CG asset was recreated to withstand the scrutiny of close-ups and daylight. Both Craig Mazin and Neil Druckmann worked closely with us during the build. We referenced the game and applied elements of that version to ours. You’ll notice that his head is in the shape of a crown; this conveys that he’s a powerful force.

During the Burning Bloater sequence in Episode 2, we brainstormed with Philip Engström, ILP VFX Supervisor, on how this creature would react to the flamethrower and how it would affect the ground as it burns. When the Bloater finally falls to the ground and dies, the extraordinary detail of the embers burning, fluid draining and melting the surrounding snow really sells that the CG creature was in the terrain.

Given the Bloater’s imposing size, how did you approach its integration into scenes with the actors? What techniques did you use to create such a realistic and menacing appearance?

Fiona Campbell Westgate // For the Bloater, a stunt performer wearing a motion capture suit was filmed on set. This provided interaction with the actors and the environment.
VFX enhanced the intensity of his movements, incorporating simulations into the CG Bloater’s skin and muscles to reflect the weight and force of this terrifying creature as it moves.

Seattle in The Last of Us is a completely devastated city. Can you talk about how you recreated this destruction? What were the most difficult visual aspects to realize for this post-apocalyptic city?

Fiona Campbell Westgate // We were meticulous in blending the CG destruction with the practical environment. The flora’s ability to overtake the environment had to be believable, and we adhered to the principle of form follows function. Due to the vastness of the CG devastation, it was crucial to avoid repetitive effects. Consequently, our vendors were tasked with creating bespoke designs that evoked a sense of awe and beauty.

Was Seattle’s architecture a key element in how you designed the visual effects? How did you adapt the city’s real-life urban landscape to meet the needs of the story while maintaining a coherent aesthetic?

Alex Wang // It’s always important to Craig and Neil that we remain true to the cities our characters are in. DNEG was one of our primary vendors for Boston in Season 1, so it was natural for them to return for Season 2, this time focusing on Seattle. DNEG’s VFX Supervisor, Stephen James, who played a crucial role in developing the visual language of Boston for Season 1, also returns for this season. Stephen and Melaina Mace (DFX Supervisor) led a team to Seattle to shoot plates and perform lidar scans of parts of the city. We identified the buildings unique to Seattle that would have existed in 2003 and ensured these buildings were always included in our establishing shots.

Overgrowth and destruction have significantly influenced the environments in The Last of Us. The environment functions almost as a character in both Season 1 and Season 2. In the last season, the building destruction in Boston was primarily caused by military bombings.
During this season, destruction mainly arises from dilapidation. Living in the Pacific Northwest, I understand how damp it can get for most of the year. I imagined that, over 20 years, the integrity of the buildings would be compromised by natural forces. This abundant moisture creates an exceptionally lush and vibrant landscape for much of the year. Therefore, when designing Seattle, we ensured that the destruction and overgrowth appeared intentional and aesthetically distinct from those of Boston.

Fiona Campbell Westgate // Led by Stephen James, DNEG VFX Supervisor, and Melaina Mace, DNEG DFX Supervisor, the team captured photography and drone footage, and the Clear Angle team captured LiDAR data over a three-day period in Seattle. It was crucial to include recognizable Seattle landmarks that would resonate with people familiar with the game.

The devastated city almost becomes a character in itself this season. What aspects of the visual effects did you have to enhance to increase the immersion of the viewer into this hostile and deteriorated environment?

Fiona Campbell Westgate // It is indeed a character. Craig wanted it to be deteriorated but to have moments where it’s also beautiful in its devastation. For instance, in the Music Store in Episode 4, where Ellie plays guitar for Dina, the deteriorated interior provides a beautiful backdrop to this intimate moment. The Set Decorating team dressed a specific section of the set, while VFX extended the destruction and overgrowth to encompass the entire environment, immersing the viewer in strange yet familiar surroundings.

Photograph by Liane Hentscher/HBO

The sequence where Ellie navigates a boat through a violent storm is stunning. What were the key challenges in creating this scene, especially with water simulation and the storm’s effects?

Alex Wang // In the concluding episode of Season 2, Ellie is deep in Seattle, searching for Abby.
The episode draws us closer to the Aquarium, where this area of Seattle is heavily flooded. Naturally, this brings challenges with CG water. In the scene where Ellie encounters Isaac and the W.L.F. soldiers by the dock, we had a complex shoot involving multiple locations, including a water tank and a boat gimbal. There were also several full CG shots.

For Isaac’s riverine boat, which was in a stormy ocean, I felt it was essential that the boat and the actors were given the appropriate motion. Weta FX assisted with tech-vis for all the boat gimbal work. We began with different ocean wave sizes caused by the storm, and once the filmmakers selected one, the boat’s motion in the tech-vis fed the special FX gimbal. When Ellie gets into the jon boat, I didn’t want it on the same gimbal because I felt it would be too mechanical. Ellie’s weight needed to affect the boat as she got in, and that wouldn’t have happened with a mechanical gimbal. So, we opted to have her boat in a water tank for this scene. Special FX had wave makers that provided the boat with the appropriate movement. Instead of guessing what the ocean sim for the riverine boat should be, the tech-vis data enabled DNEG to get a head start on the water simulations in post-production.

Craig wanted this sequence to appear convincingly dark, much like it looks out on the ocean at night. This allowed us to create dramatic visuals, using lightning strikes at moments to reveal depth.

Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint?

Alex Wang // The Last of Us tells the story of our characters’ journey. If you look at how Season 2 begins in Jackson, it differs significantly from how we conclude the season in Seattle. We seldom return to the exact same location in each episode, meaning every episode presents a unique challenge. The scope of work this season has been incredibly rewarding.
We burned a Bloater, and we also introduced spores this season!

Photograph by Liane Hentscher/HBO

Looking back on the project, what aspects of the visual effects are you most proud of?

Alex Wang // The Jackson Battle was incredibly complex, involving a grueling and lengthy shoot in quite challenging conditions, along with over 600 VFX shots in Episode 2. It was truly inspiring to witness the determination of every department and vendor to give their all and create something remarkable.

Fiona Campbell Westgate // I am immensely proud of the exceptional work accomplished by all of our vendors. During the VFX reviews, I found myself clapping with delight when the final shots were displayed; it was exciting to see the remarkable results of the artists’ efforts come to light.

How long have you worked on this show?

Alex Wang // I’ve been on this season for nearly two years.

Fiona Campbell Westgate // A little over one year; I joined the show in April 2024.

What’s the VFX shot count?

Alex Wang // We had just over 2,500 shots this season.

Fiona Campbell Westgate // In Season 2, there were a total of 2,656 visual effects shots.

What is your next project?

Fiona Campbell Westgate // Stay tuned…

A big thanks for your time.

WANT TO KNOW MORE?
Blackbird: Dedicated page about The Last of Us – Season 2 on Blackbird website.
DNEG: Dedicated page about The Last of Us – Season 2 on DNEG website.
Important Looking Pirates: Dedicated page about The Last of Us – Season 2 on Important Looking Pirates website.
RISE: Dedicated page about The Last of Us – Season 2 on RISE website.
Weta FX: Dedicated page about The Last of Us – Season 2 on Weta FX website.

© Vincent Frei – The Art of VFX – 2025
  • VFX EMMY CONTENDERS: SETTING THE BENCHMARK FOR VISUAL EFFECTS ON TV

    By JENNIFER CHAMPAGNE

    House of the Dragon expands its dragon-filled world in its second season, offering more large-scale battles and heightened aerial warfare.

    The 2025 Emmy race for outstanding visual effects is shaping up to be one of the most competitive in years, with major genre heavyweights breaking new ground on what’s possible on television. As prestige fantasy and sci-fi continue to dominate, the battle for the category will likely come down to sheer scale, technical innovation and how seamlessly effects are integrated into storytelling. Returning titans like House of the Dragon and The Lord of the Rings: The Rings of Power have proven their ability to deliver breathtaking visuals. At the same time, Dune: Prophecy enters the conversation as a visually stunning newcomer. The Boys remains the category’s wildcard, bringing its own brand of hyper-realistic, shock-value effects to the race. With its subtle yet immersive world-building, The Penguin stands apart from the spectacle-driven contenders, using “invisible” VFX to transform Gotham into a post-flooded, decaying metropolis. Each series offers a distinct approach to digital effects, making for an intriguing showdown between blockbuster-scale world-building and more nuanced, atmospheric craftsmanship.

    Sharing the arena with marquee pacesetters HBO’s The Last of Us, Disney+’s Andor and Netflix’s Squid Game, these series lead the charge in ensuring that the 2025 Emmy race isn’t just about visual spectacle; it’s about which shows will set the next benchmark for visual effects on television. The following insights and highlights from VFX supervisors of likely Emmy contenders illustrate why their award-worthy shows have caught the attention of TV watchers and VFX Emmy voters.

    The Penguin, with its subtle yet immersive world-building, stands apart from the spectacle-driven contenders, using “invisible” VFX to transform Gotham into a post-flooded, decaying metropolis.

    For The Lord of the Rings: The Rings of Power VFX Supervisor Jason Smith, the second season presented some of the Amazon series’ most ambitious visual effects challenges. From the epic Battle of Eregion to the painstaking design of the Entwives, Smith and his team at Wētā FX sought to advance digital world-building while staying true to J.R.R. Tolkien’s vision. “The Battle of Eregion was amazing to work on – and challenging too, because it’s a pivotal moment in Tolkien’s story,” Smith states. Unlike typical large-scale clashes, this battle begins as a siege culminating in an explosive cavalry charge. “We looked for every way we could to heighten the action during the siege by keeping the armies interacting, even at a distance,” Smith explains. His team introduced projectiles and siege weaponry to create dynamic action, ensuring the prolonged standoff felt kinetic. The environment work for Eregion posed another challenge. The city was initially constructed as a massive digital asset in Season 1, showcasing the collaborative brilliance of the Elves and Dwarves. In Season 2, that grandeur had to be systematically razed to the ground. “The progression of destruction had to be planned extremely carefully,” Smith notes. His team devised seven distinct levels of damage, mapping out in granular detail which areas would be smoldering, reduced to rubble or utterly consumed by fire. “Our goal was to have the audience feel the loss that the Elves feel as this beautiful symbol of the height of Elvendom is utterly razed.”

    The SSVFX team helped shape a world for Lady in the Lake that felt rich, lived-in and historically precise.

    One of the most ambitious effects for Season 4 of The Boys was Splinter, who has the ability to duplicate himself. The sequence required eight hours of rehearsal and six hours of filming for one shot. The final effect was a mix of prosthetic cover-up pieces and VFX face replacement.

    The Penguin, HBO Max’s spinoff series of The Batman, centers on Oswald ‘Oz’ Cobb’s ruthless rise to power, and relies on meticulous environmental effects, smoothly integrating CG elements to enhance Gotham’s noir aesthetic without ever calling attention to the work itself. “The most rewarding part of our work was crafting VFX that don’t feel like VFX,” says VFX Supervisor Johnny Han. Across the series’ 3,100 VFX shots, every collapsing freeway, skyline extension and flicker of light from a muzzle flash had to feel utterly real – woven so naturally into the world of Gotham that viewers never stopped to question its authenticity.

    Zimia spaceport, an enormous hub of interstellar commerce in Dune: Prophecy. The production team built a vast practical set to provide a strong scale foundation, but its full grandeur came to life in post by extending this environment with CG.

    The second season of The Lord of the Rings: The Rings of Power refined its environments, which elevate Middle-earth’s realism.

    Some of the series’ most striking visual moments were also its most understated. The shift of Gotham’s seasons – transforming sunlit summer shoots into autumn’s muted chill – helped shape the show’s somber tone, reinforcing the bleak, crime-ridden undercurrent. The city’s bridges and skyscrapers were meticulously augmented, stretching Gotham beyond the limits of practical sets while preserving its grounded, brutalist aesthetic. Even the scars and wounds on Sofia Falcone were enhanced through digital artistry, ensuring that her past traumas remained ever-present, etched into her skin.

    The series wasn’t without its large-scale effects – far from it. Han and his team orchestrated massive sequences of urban devastation. “The floodwaters were one of our biggest challenges,” Han notes, referring to the ongoing impact of the catastrophic deluge that left Gotham in ruins. One particularly harrowing sequence required simulating a tsunami tearing through the streets – not as an action set piece, but as a deeply personal moment of loss. “Telling Victor’s story of how he lost his entire family in the bombing and floods of Gotham was heartbreaking,” Han says. “Normally, you create an event like that for excitement, for tension. But for us, it was about capturing emotional devastation.”

    Perhaps the most technically intricate sequences were the shootouts, hallmarks of Gotham’s criminal underbelly. “We programmed millisecond-accurate synced flash guns to mimic dramatic gunfire light,” Han explains, ensuring that the interplay of practical and digital elements remained imperceptible. Every muzzle flash, every ricochet was meticulously planned and rendered. The ultimate achievement for Han and his team wasn’t crafting the biggest explosion or the most elaborate digital sequence – it was making Gotham itself feel inescapably real. He says, “Nothing was more important to us than for you to forget that there are 3,100 VFX shots in this series.”

    The challenge for The Residence was making one of the most recognizable buildings in the world feel both immersive and narratively engaging.

    Bringing the universe of Dune to life on TV for HBO’s Dune: Prophecy requires a delicate balance of realism and imagination, grounded in natural physics, yet awe-inspiring in scale. Dune: Prophecy looks to challenge traditional fantasy dominance with its stunning, desert-bound landscapes and intricate space-faring visuals, uniting the grandeur of Denis Villeneuve’s films with the demands of episodic storytelling. Set thousands of years before the events of the films, the series explores the early days of the Bene Gesserit, a secretive order wielding extraordinary abilities. Translating that power into a visual language required technical innovation. “Kudos to Important Looking Pirates for the space folding and Agony work,” says VFX Supervisor Mike Enriquez. No Dune project would be complete without its most iconic inhabitant, the sandworm. VFX Producer Terron Pratt says, “We’re incredibly proud of what the team at Image Engine created. Precise animation conveyed this creature’s weight and massive scale, while incredibly detailed sand simulations integrated it into the environment.” Every grain of sand had to move believably in response to the worm’s colossal presence to ensure the physics of Arrakis remained authentic.

    Floodwaters play a significant part in the destruction of Gotham in The Penguin. One particularly harrowing sequence required simulating a tsunami tearing through the streets.

    American Primeval integrated visual effects with practical techniques in creative, unconventional ways. The massacre sequence showcases technical mastery and pulls the audience into the brutal reality of the American frontier.

    For the Zimia spaceport, an enormous hub of interstellar commerce, the Dune: Prophecy production team built a vast practical set to provide a strong scale foundation. However, its full grandeur came to life in post. “By extending this environment with CG, we amplified the scope of our world, making it feel expansive and deeply impactful,” Pratt explains. The result was a sprawling, futuristic cityscape that retained a tangible weight with impeccably amalgamated practical and digital elements.

    Wētā FX sought to advance digital world-building for Season 2 of The Lord of the Rings: The Rings of Power while staying true to J.R.R. Tolkien’s vision.

    Visual effects extended beyond character work for Lady in the Lake, playing a key role in the show’s immersive world-building.

    For House of the Dragon VFX Supervisor Daði Einarsson, Season 2 presented some of the HBO show’s most complex and ambitious visual effects work. The Battle at Rook’s Rest in Episode 4 was a milestone for the series, marking the first full-scale dragon-on-dragon aerial battle. “We were tasked with pitting three dragons against each other in an all-out aerial war above a castle siege,” Einarsson says. Capturing the actors’ performances mid-flight required a combination of motion-controlled cameras, preprogrammed motion bases with saddles and LED volume lighting – all mapped directly from fully animated previsualized sequences approved by director Alan Taylor and Showrunner Ryan J. Condal. On the ground, the battlefield required digital crowd replication, extensive environment extensions, and pyrotechnic enhancements to create a war zone that felt both vast and intimately chaotic. “In the air, we created a fully CG version of the environment to have full control over the camera work,” Einarsson explains. Under the supervision of Sven Martin, the Pixomondo team stitched together breathtaking aerial combat, ensuring the dragons moved with the weight and raw power befitting their legendary status.

    Blood, weapon effects and period-accurate muzzle flashes heightened the intensity of the brutal fight sequences in American Primeval. The natural elements and violence reflected the harsh realities of the American West in 1857.

    The Residence brings a refined, detailed approach to environmental augmentation, using visual effects to take the audience on a journey through the White House in this political murder mystery.

    Episode 7 introduced Hugh Hammer’s claim of Vermithor, Westeros’ second-largest dragon. Rather than breaking the sequence into multiple shots, Einarsson and director Loni Peristere saw an opportunity to craft something exceptional: a single, uninterrupted long take reminiscent of Children of Men and Gravity. “It took a lot of planning to design a series of beats that cohesively flowed from one into the next, with Hugh leading the camera by action and reaction,” Einarsson says. The sequence, which involved Hugh dodging Vermithor’s flames and ultimately claiming the beast through sheer bravery, was technically demanding. To achieve this, the team stitched together five separate takes of Hugh’s performance, shot over two separate days weeks apart, due to the set needing to be struck and rebuilt in different configurations. VFX Supervisor Wayne Stables and the team at Wētā ensured the transitions were imperceptible, uniting practical and digital elements into a continuous, immersive moment. “The Dragonmont Cavern environment was a beautiful, raised gantry and cave designed by Jim Clay and expanded by Wētā,” Einarsson says. Then Rowley Imran’s stunt team and Mike Dawson’s SFX team engulfed the set in practical flames so every element, from fire to dust to movement, contributed to the illusion of real-time danger.

    For Einarsson, the most significant challenge wasn’t just in making these sequences visually spectacular – it was ensuring they belonged within the same world as the quiet, dialogue-driven moments in King’s Landing. “The aim is for incredibly complex and spectacular visual effects scenes to feel like they belong in the same world as two people talking in a council chamber,” he states. Every dragon, flame and gust of wind had to feel as lived-in as the politics playing out beneath them.

    Season 4 of The Boys delivered the fully CG octopus character, Ambrosius. A challenge was crafting a believable yet expressive sea creature and keeping it grounded while still embracing the show’s signature absurdity.

    In The Penguin, Gotham isn’t just a city; it’s a living, breathing entity shaped by destruction, decay and the quiet menace lurking beneath its streets.

    The Boys continues to defy genre norms, delivering audacious, technically complex effects that lean into its hyperviolent, satirical take on superheroes. For The Boys VFX Supervisor Stephan Fleet, Season 4 delivered some of the Amazon Prime show’s most dramatic effects yet, from the self-replicating Splinter to the fully CG octopus character, Ambrosius. Splinter, who has the ability to duplicate himself, presented a unique challenge. Fleet says, “His introduction on the podium was a complex motion control sequence. Eight hours of rehearsal, six hours of filming – for one shot.” Splinter’s design came with an added layer of difficulty. “We had to figure out how to make a nude male clone,” Fleet says. “Normally, you can hide doubles’ bodies in clothes – not this time!” The final effect required a mix of prosthetic cover-up pieces and VFX face replacement, and it took multiple iterations to make it work. Ambrosius became one of The Boys’ most unexpected breakout characters. “It’s fun making a full-on character in the show that’s an octopus,” Fleet reveals in a nod to the show’s absurd side. “As much as possible, we aim for a grounded approach and try to attain a level of thought and detail you don’t often find on TV.”

    While the battle for outstanding visual effects will likely be dominated by large-scale fantasy and sci-fi productions, several standout series are also making waves with their innovative and immersive visual storytelling. Netflix’s The Residence, led by VFX Supervisor Seth Hill, brings a refined, detailed approach to environmental augmentation, enhancing the grandeur of the White House setting in this political murder mystery. “Using visual effects to take the audience on a journey through an iconic location like the White House was really fun,” Hill says. “It’s a cool and unique use of visual effects.” One of the most ambitious sequences involved what the team called the Doll House, a digital rendering of the White House with its south façade removed, exposing the interior like a cross-section of a dollhouse. “Going back and forth from filmed footage to full CGI – that jump from grounded realism to abstract yet still real – was quite tricky,” Hill explains, adding, “VFX is best when it is in service of the storytelling, and The Residence presented a unique opportunity to do just that. It was a big challenge and a tough nut to crack, but those creative and technical hurdles are a good part of what makes it so rewarding.”

    “We were tasked with pitting three dragons against each other in an all-out aerial war above a castle siege. In the air, we created a fully CG version of the environment to have full control over the camera work.”—Daði Einarsson, VFX Supervisor, House of the Dragon

    The Battle at Rook’s Rest in Episode 4 of House of the Dragon Season 2 was a major milestone for the series, marking the first full-scale dragon-on-dragon aerial battle.

    Season 2 of House of the Dragon presented some of the most complex and ambitious visual effects work for the show to date.

    For Jay Worth, VFX Supervisor on Apple TV+’s Lady in the Lake, the challenge was two-fold: create seamless effects and preserve the raw emotional truth of a performance. One of the most significant technical achievements was de-aging Natalie Portman. “It seems so easy on paper, but the reality was far more challenging,” Worth admits. Worth had tackled de-aging before, but never with the same level of success. “For me, it is simply because of her performance.” Portman delivered a nuanced, youthful portrayal that felt entirely authentic to the time period. “It made our job both so much easier and set the bar so high for us. Sometimes, you can hide in a scene like this – you pull the camera back, cut away before the most expressive parts of the dialogue, or the illusion breaks,” Worth explains. In Lady in the Lake, there was nowhere to hide. “I think that is what I am most proud of with these shots. It felt like the longer you stayed on them, the more you believed them. That is a real feat with this sort of work.” Skully VFX handled the de-aging. “They nailed the look early on and delivered throughout the project on this difficult task.” Working alongside Production Designer Jc Molina, the VFX team helped shape a world that felt rich, lived-in and historically precise. “We were entrusted with the most important part of this show – do we believe this performance from this character in this part of her journey? – and we feel like we were able to deliver on this challenge.”

    On the other end of the spectrum, Netflix’s American Primeval, under the guidance of VFX Supervisor Andrew Ceperley, delivers rugged, visceral realism in its portrayal of the untamed American frontier. With brutal battle sequences, sprawling landscapes and historical re-creations that interweave practical and digital effects, the series stands as a testament to how VFX can enhance grounded, historical storytelling. Ceperley says, “The standout is definitely the nearly three-minute single-shot massacre sequence in the forest episode.” Designed to immerse the audience in the raw, chaotic violence of the frontier, the scene captures every brutal detail with unrelenting intensity. The challenge was crafting invisible visual effects, enhancing practical stunts and destruction without breaking the immersive, handheld camera style. “The sequence was designed to be one shot made up of 10 individual takes, shot over seven days, seamlessly stitched together, all while using a handheld camera on an extremely wide-angle lens.” One of the most complex moments involved a bull smashing through a wagon while the characters hid underneath. Rather than relying on CGI, the team took a practical approach, placing a 360-degree camera under the wagon while the special effects team rigged it to explode in a way that simulated an impact. “A real bull was then guided to run toward the 360 camera and leap over it,” Ceperley says. The footage was blended with live-action shots of the actors with minimal CGI enhancements – just dust and debris – to complete the effect. Adding to the difficulty, the scene was set at sunset, giving the team an extremely limited window to capture each day’s footage. 
The massacre sequence was a prime example of integrating visual effects with practical techniques in creative, unconventional ways, blending old-school in-camera effects with modern stitching techniques to create a visceral cinematic moment that stayed true to the show’s raw, historical aesthetic. “Using old techniques in new, even strange ways and seeing it pay off and deliver on the original vision was the most rewarding part.”
    #vfx #emmy #contenders #setting #benchmark
    VFX EMMY CONTENDERS: SETTING THE BENCHMARK FOR VISUAL EFFECTS ON TV
By JENNIFER CHAMPAGNE

House of the Dragon expands its dragon-filled world in its second season, offering more large-scale battles and heightened aerial warfare.

The 2025 Emmy race for outstanding visual effects is shaping up to be one of the most competitive in years, with major genre heavyweights breaking new ground on what’s possible on television. As prestige fantasy and sci-fi continue to dominate, the battle for the category will likely come down to sheer scale, technical innovation and how seamlessly effects are integrated into storytelling. Returning titans like House of the Dragon and The Lord of the Rings: The Rings of Power have proven their ability to deliver breathtaking visuals. At the same time, Dune: Prophecy enters the conversation as a visually stunning newcomer. The Boys remains the category’s wildcard, bringing its own brand of hyper-realistic, shock-value effects to the race. With its subtle yet immersive world-building, The Penguin stands apart from the spectacle-driven contenders, using “invisible” VFX to transform Gotham into a post-flooded, decaying metropolis. Each series offers a distinct approach to digital effects, making for an intriguing showdown between blockbuster-scale world-building and more nuanced, atmospheric craftsmanship. Sharing the arena with marquee pacesetters HBO’s The Last of Us, Disney+’s Andor and Netflix’s Squid Game, these series lead the charge in ensuring that the 2025 Emmy race isn’t just about visual spectacle; it’s about which shows will set the next benchmark for visual effects on television. The following insights and highlights from VFX supervisors of likely Emmy contenders illustrate why their award-worthy shows have caught the attention of TV watchers and VFX Emmy voters.
For The Lord of the Rings: The Rings of Power VFX Supervisor Jason Smith, the second season presented some of the Amazon series’ most ambitious visual effects challenges. From the epic Battle of Eregion to the painstaking design of the Entwives, Smith and his team at Wētā FX sought to advance digital world-building while staying true to J.R.R. Tolkien’s vision. “The Battle of Eregion was amazing to work on – and challenging too, because it’s a pivotal moment in Tolkien’s story,” Smith states. Unlike typical large-scale clashes, this battle begins as a siege culminating in an explosive cavalry charge. “We looked for every way we could to heighten the action during the siege by keeping the armies interacting, even at a distance,” Smith explains. His team introduced projectiles and siege weaponry to create dynamic action, ensuring the prolonged standoff felt kinetic. The environment work for Eregion posed another challenge. The city was initially constructed as a massive digital asset in Season 1, showcasing the collaborative brilliance of the Elves and Dwarves. In Season 2, that grandeur had to be systematically razed to the ground. “The progression of destruction had to be planned extremely carefully,” Smith notes. His team devised seven distinct levels of damage, mapping out in granular detail which areas would be smoldering, reduced to rubble or utterly consumed by fire. “Our goal was to have the audience feel the loss that the Elves feel as this beautiful symbol of the height of Elvendom is utterly razed.”

The SSVFX team helped shape a world for Lady in the Lake that felt rich, lived-in and historically precise.

One of the most ambitious effects for Season 4 of The Boys was Splinter, who has the ability to duplicate himself. The sequence required eight hours of rehearsal and six hours of filming for one shot. The final effect was a mix of prosthetic cover-up pieces and VFX face replacement.

The Penguin, HBO Max’s spinoff series of The Batman, centers on Oswald ‘Oz’ Cobb’s ruthless rise to power and relies on meticulous environmental effects, smoothly integrating CG elements to enhance Gotham’s noir aesthetic without ever calling attention to the work itself. “The most rewarding part of our work was crafting VFX that don’t feel like VFX,” says VFX Supervisor Johnny Han. Across the series’ 3,100 VFX shots, every collapsing freeway, skyline extension and flicker of light from a muzzle flash had to feel utterly real – woven so naturally into the world of Gotham that viewers never stopped to question its authenticity.

Zimia spaceport is an enormous hub of interstellar commerce in Dune: Prophecy. The production team built a vast practical set to provide a strong scale foundation, but its full grandeur came to life in post by extending this environment with CG.

The second season of The Lord of the Rings: The Rings of Power refined its environments, which elevate Middle-earth’s realism.

Some of The Penguin’s most striking visual moments were also its most understated. The shift of Gotham’s seasons – transforming sunlit summer shoots into autumn’s muted chill – helped shape the show’s somber tone, reinforcing the bleak, crime-ridden undercurrent. The city’s bridges and skyscrapers were meticulously augmented, stretching Gotham beyond the limits of practical sets while preserving its grounded, brutalist aesthetic. Even the scars and wounds on Sofia Falcone were enhanced through digital artistry, ensuring that her past traumas remained ever-present, etched into her skin. The series wasn’t without its large-scale effects – far from it. Han and his team orchestrated massive sequences of urban devastation. “The floodwaters were one of our biggest challenges,” Han notes, referring to the ongoing impact of the catastrophic deluge that left Gotham in ruins.
One particularly harrowing sequence required simulating a tsunami tearing through the streets – not as an action set piece, but as a deeply personal moment of loss. “Telling Victor’s story of how he lost his entire family in the bombing and floods of Gotham was heartbreaking,” Han says. “Normally, you create an event like that for excitement, for tension. But for us, it was about capturing emotional devastation.” Perhaps the most technically intricate sequences were the shootouts, hallmarks of Gotham’s criminal underbelly. “We programmed millisecond-accurate synced flash guns to mimic dramatic gunfire light,” Han explains, ensuring that the interplay of practical and digital elements remained imperceptible. Every muzzle flash, every ricochet was meticulously planned and rendered. The ultimate achievement for Han and his team wasn’t crafting the biggest explosion or the most elaborate digital sequence – it was making Gotham itself feel inescapably real. He says, “Nothing was more important to us than for you to forget that there are 3,100 VFX shots in this series.”

The challenge for The Residence was making one of the most recognizable buildings in the world feel both immersive and narratively engaging.

Bringing the universe of Dune to life on TV for HBO’s Dune: Prophecy requires a delicate balance of realism and imagination, grounded in natural physics, yet awe-inspiring in scale. Dune: Prophecy looks to challenge traditional fantasy dominance with its stunning, desert-bound landscapes and intricate space-faring visuals, uniting the grandeur of Denis Villeneuve’s films with the demands of episodic storytelling. Set thousands of years before the events of the films, the series explores the early days of the Bene Gesserit, a secretive order wielding extraordinary abilities. Translating that power into a visual language required technical innovation. “Kudos to Important Looking Pirates for the space folding and [Lila’s] Agony work,” says VFX Supervisor Mike Enriquez.
No Dune project would be complete without its most iconic inhabitant, the sandworm. “We’re incredibly proud of what the team at Image Engine created,” says VFX Producer Terron Pratt. “Precise animation conveyed this creature’s weight and massive scale, while incredibly detailed sand simulations integrated it into the environment.” Every grain of sand had to move believably in response to the worm’s colossal presence to ensure the physics of Arrakis remained authentic.

Floodwaters play a significant part in the destruction of Gotham in The Penguin. One particularly harrowing sequence required simulating a tsunami tearing through the streets.

American Primeval integrated visual effects with practical techniques in creative, unconventional ways. The massacre sequence showcases technical mastery and pulls the audience into the brutal reality of the American frontier.

For the Zimia spaceport, an enormous hub of interstellar commerce, the Dune: Prophecy production team built a vast practical set to provide a strong scale foundation. However, its full grandeur came to life in post. “By extending this environment with CG, we amplified the scope of our world, making it feel expansive and deeply impactful,” Pratt explains. The result was a sprawling, futuristic cityscape that retained a tangible weight, seamlessly blending practical and digital elements.

Wētā FX sought to advance digital world-building for Season 2 of The Lord of the Rings: The Rings of Power while staying true to J.R.R. Tolkien’s vision.

Visual effects extended beyond character work for Lady in the Lake, playing a key role in the show’s immersive world-building.

For House of the Dragon VFX Supervisor Daði Einarsson, Season 2 presented some of the HBO show’s most complex and ambitious visual effects work. The Battle at Rook’s Rest in Episode 4 was a milestone for the series, marking the first full-scale dragon-on-dragon aerial battle.
“We were tasked with pitting three dragons against each other in an all-out aerial war above a castle siege,” Einarsson says. Capturing the actors’ performances mid-flight required a combination of motion-controlled cameras, preprogrammed motion bases with saddles and LED volume lighting – all mapped directly from fully animated previsualized sequences approved by director Alan Taylor and Showrunner Ryan J. Condal. On the ground, the battlefield required digital crowd replication, extensive environment extensions and pyrotechnic enhancements to create a war zone that felt both vast and intimately chaotic. “In the air, we created a fully CG version of the environment to have full control over the camera work,” Einarsson explains. Under the supervision of Sven Martin, the Pixomondo team stitched together breathtaking aerial combat, ensuring the dragons moved with the weight and raw power befitting their legendary status.

Blood, weapon effects and period-accurate muzzle flashes heightened the intensity of the brutal fight sequences in American Primeval. The natural elements and violence reflected the harsh realities of the American West in 1857.

The Residence brings a refined, detailed approach to environmental augmentation, using visual effects to take the audience on a journey through the White House in this political murder mystery.

Episode 7 introduced Hugh Hammer’s claim of Vermithor, Westeros’ second-largest dragon. Rather than breaking the sequence into multiple shots, Einarsson and director Loni Peristere saw an opportunity to craft something exceptional: a single, uninterrupted long take reminiscent of Children of Men and Gravity. “It took a lot of planning to design a series of beats that cohesively flowed from one into the next, with Hugh leading the camera by action and reaction,” Einarsson says. The sequence, which involved Hugh dodging Vermithor’s flames and ultimately claiming the beast through sheer bravery, was technically demanding.
To achieve this, the team stitched together five separate takes of Hugh’s performance, shot on two days weeks apart because the set had to be struck and rebuilt in different configurations. VFX Supervisor Wayne Stables and the team at Wētā ensured the transitions were imperceptible, uniting practical and digital elements into a continuous, immersive moment. “The Dragonmont Cavern environment was a beautiful, raised gantry and cave designed by [Production Designer] Jim Clay and expanded by Wētā,” Einarsson says. Then Rowley Imran’s stunt team and Mike Dawson’s SFX team engulfed the set in practical flames so every element, from fire to dust to movement, contributed to the illusion of real-time danger. For Einarsson, the most significant challenge wasn’t just making these sequences visually spectacular – it was ensuring they belonged within the same world as the quiet, dialogue-driven moments in King’s Landing. “The aim is for incredibly complex and spectacular visual effects scenes to feel like they belong in the same world as two people talking in a council chamber,” he states. Every dragon, flame and gust of wind had to feel as lived-in as the politics playing out beneath them.

Season 4 of The Boys delivered the fully CG octopus character, Ambrosius. A challenge was crafting a believable yet expressive sea creature and keeping it grounded while still embracing the show’s signature absurdity.

In The Penguin, Gotham isn’t just a city; it’s a living, breathing entity shaped by destruction, decay and the quiet menace lurking beneath its streets.

The Boys continues to defy genre norms, delivering audacious, technically complex effects that lean into its hyperviolent, satirical take on superheroes. For The Boys VFX Supervisor Stephan Fleet, Season 4 delivered some of the Amazon Prime show’s most dramatic effects yet, from the self-replicating Splinter to the fully CG octopus character, Ambrosius. Splinter, who has the ability to duplicate himself, presented a unique challenge.
Fleet says, “His introduction on the podium was a complex motion control sequence. Eight hours of rehearsal, six hours of filming – for one shot.” Splinter’s design came with an added layer of difficulty. “We had to figure out how to make a nude male clone,” Fleet says. “Normally, you can hide doubles’ bodies in clothes – not this time!” The final effect combined prosthetic cover-up pieces with VFX face replacement and took multiple iterations to get right. Ambrosius became one of The Boys’ most unexpected breakout characters. “It’s fun making a full-on character in the show that’s an octopus,” Fleet reveals in a nod to the show’s absurd side. “As much as possible, we aim for a grounded approach and try to attain a level of thought and detail you don’t often find on TV.”

While the battle for outstanding visual effects will likely be dominated by large-scale fantasy and sci-fi productions, several standout series are also making waves with their innovative and immersive visual storytelling. Netflix’s The Residence, led by VFX Supervisor Seth Hill, brings a refined, detailed approach to environmental augmentation, enhancing the grandeur of the White House setting in this political murder mystery. “Using visual effects to take the audience on a journey through an iconic location like the White House was really fun,” Hill says. “It’s a cool and unique use of visual effects.” One of the most ambitious sequences involved what the team called the Doll House, a digital rendering of the White House with its south façade removed, exposing the interior like a cross-section of a dollhouse. “Going back and forth from filmed footage to full CGI – that jump from grounded realism to abstract yet still real – was quite tricky,” Hill says, adding, “VFX is best when it is in service of the storytelling, and The Residence presented a unique opportunity to do just that.
It was a big challenge and a tough nut to crack, but those creative and technical hurdles are a good part of what makes it so rewarding.”

For Jay Worth, VFX Supervisor on Apple TV+’s Lady in the Lake, the challenge was two-fold: create seamless effects and preserve the raw emotional truth of a performance. One of the most significant technical achievements was de-aging Natalie Portman. “It seems so easy on paper, but the reality was far more challenging,” Worth admits. Worth had tackled de-aging before, but never with the same level of success. “For me, it is simply because of her performance.” Portman delivered a nuanced, youthful portrayal that felt entirely authentic to the time period. “It made our job both so much easier and set the bar so high for us. Sometimes, you can hide in a scene like this – you pull the camera back, cut away before the most expressive parts of the dialogue, or the illusion breaks,” Worth explains. In Lady in the Lake, there was nowhere to hide. “I think that is what I am most proud of with these shots. It felt like the longer you stayed on them, the more you believed them. That is a real feat with this sort of work.” Skully VFX handled the de-aging. “They nailed the look early on and delivered throughout the project on this difficult task.” Working alongside Production Designer Jc Molina, the VFX team helped shape a world that felt rich, lived-in and historically precise.
“We were entrusted with the most important part of this show – do we believe this performance from this character in this part of her journey? – and we feel like we were able to deliver on this challenge.”

On the other end of the spectrum, Netflix’s American Primeval, under the guidance of VFX Supervisor Andrew Ceperley, delivers rugged, visceral realism in its portrayal of the untamed American frontier. With brutal battle sequences, sprawling landscapes and historical re-creations that interweave practical and digital effects, the series stands as a testament to how VFX can enhance grounded, historical storytelling. Ceperley says, “The standout is definitely the nearly three-minute single-shot massacre sequence in the forest episode.” Designed to immerse the audience in the raw, chaotic violence of the frontier, the scene captures every brutal detail with unrelenting intensity. The challenge was crafting invisible visual effects, enhancing practical stunts and destruction without breaking the immersive, handheld camera style. “The sequence was designed to be one shot made up of 10 individual takes, shot over seven days, seamlessly stitched together, all while using a handheld camera on an extremely wide-angle lens.” One of the most complex moments involved a bull smashing through a wagon while the characters hid underneath. Rather than relying on CGI, the team took a practical approach, placing a 360-degree camera under the wagon while the special effects team rigged it to explode in a way that simulated an impact. “A real bull was then guided to run toward the 360 camera and leap over it,” Ceperley says. The footage was blended with live-action shots of the actors with minimal CGI enhancements – just dust and debris – to complete the effect. Adding to the difficulty, the scene was set at sunset, giving the team an extremely limited window to capture each day’s footage.
The massacre sequence was a prime example of integrating visual effects with practical techniques in creative, unconventional ways, blending old-school in-camera effects with modern stitching techniques to create a visceral cinematic moment that stayed true to the show’s raw, historical aesthetic. “Using old techniques in new, even strange ways and seeing it pay off and deliver on the original vision was the most rewarding part.”
    WWW.VFXVOICE.COM
“Using visual effects to take the audience on a journey through an iconic location like the White House was really fun,” Hill says. “It’s a cool and unique use of visual effects.” One of the most ambitious sequences involved what the team called the Doll House, a digital rendering of the White House with its south façade removed, exposing the interior like a cross-section of a dollhouse. “Going back and forth from filmed footage to full CGI – that jump from grounded realism to abstract yet still real – was quite tricky,” Hill explains, adding, “VFX is best when it is in service of the storytelling, and The Residence presented a unique opportunity to do just that. It was a big challenge and a tough nut to crack, but those creative and technical hurdles are a good part of what makes it so rewarding.” “We were tasked with pitting three dragons against each other in an all-out aerial war above a castle siege. In the air, we created a fully CG version of the environment to have full control over the camera work.”—Daði Einarsson, VFX Supervisor, House of the Dragon The Battle at Rook’s Rest in Episode 4 of House of the Dragon Season 2 was a major milestone for the series, marking the first full-scale dragon-on-dragon aerial battle. (Image courtesy of HBO) Season 2 of House of the Dragon presented some of the most complex and ambitious visual effects work for the show to date. (Photo: Theo Whiteman. Courtesy of HBO) For Jay Worth, VFX Supervisor on Apple TV+’s Lady in the Lake, the challenge was two-fold: create seamless effects and preserve the raw emotional truth of a performance. One of the most significant technical achievements was de-aging Natalie Portman. “It seems so easy on paper, but the reality was far more challenging,” Worth admits. Worth had tackled de-aging before, but never with the same level of success. 
“For me, it is simply because of her performance.” Portman delivered a nuanced, youthful portrayal that felt entirely authentic to the time period. “It made our job both so much easier and set the bar so high for us. Sometimes, you can hide in a scene like this – you pull the camera back, cut away before the most expressive parts of the dialogue, or the illusion breaks,” Worth explains. In Lady in the Lake, there was nowhere to hide. “I think that is what I am most proud of with these shots. It felt like the longer you stayed on them, the more you believed them. That is a real feat with this sort of work.” Skully VFX handled the de-aging. “They nailed the look early on and delivered throughout the project on this difficult task.” Working alongside Production Designer Jc Molina, the VFX team helped shape a world that felt rich, lived-in and historically precise. “We were entrusted with the most important part of this show – do we believe this performance from this character in this part of her journey? – and we feel like we were able to deliver on this challenge.” On the other end of the spectrum, Netflix’s American Primeval, under the guidance of VFX Supervisor Andrew Ceperley, delivers rugged, visceral realism in its portrayal of the untamed American frontier. With brutal battle sequences, sprawling landscapes and historical re-creations that interweave practical and digital effects, the series stands as a testament to how VFX can enhance grounded, historical storytelling. Ceperley says, “The standout is definitely the nearly three-minute single-shot massacre sequence in the forest episode.” Designed to immerse the audience in the raw, chaotic violence of the frontier, the scene captures every brutal detail with unrelenting intensity. The challenge was crafting invisible visual effects, enhancing practical stunts and destruction without breaking the immersive, handheld camera style. 
“The sequence was designed to be one shot made up of 10 individual takes, shot over seven days, seamlessly stitched together, all while using a handheld camera on an extremely wide-angle lens.” One of the most complex moments involved a bull smashing through a wagon while the characters hid underneath. Rather than relying on CGI, the team took a practical approach, placing a 360-degree camera under the wagon while the special effects team rigged it to explode in a way that simulated an impact. “A real bull was then guided to run toward the 360 camera and leap over it,” Ceperley says. The footage was blended with live-action shots of the actors with minimal CGI enhancements – just dust and debris – to complete the effect. Adding to the difficulty, the scene was set at sunset, giving the team an extremely limited window to capture each day’s footage. The massacre sequence was a prime example of integrating visual effects with practical techniques in creative, unconventional ways, blending old-school in-camera effects with modern stitching techniques to create a visceral cinematic moment that stayed true to the show’s raw, historical aesthetic. “Using old techniques in new, even strange ways and seeing it pay off and deliver on the original vision was the most rewarding part.”
  • Design to Code with the Figma MCP Server

Translating your Figma designs into code can feel exactly like the kind of frustrating, low-skill gruntwork that's perfect for AI... except that most of us have also watched AI butcher hopeful screenshots into unresponsive spaghetti. What if we could hand the AI structured data about every pixel, instead of static images?

This is how Figma Model Context Protocol (MCP) servers work. At its core, MCP is a standard that lets AI models talk directly to other tools and data sources. In our case, MCP means AI can tap into Figma's API, moving beyond screenshot guesswork to generations backed by the semantic details of your design. Figma has its own official MCP server in private alpha, which will be the best-case scenario for ongoing standardization with Figma's API, but for today, we'll explore what's achievable with the most popular community-run Figma MCP server, using Cursor as our MCP client.

The anatomy of a design handoff, and why Figma MCP is a step forward

It's helpful to know first what problem we're trying to solve with Figma MCP. In case you haven't had the distinct pleasure of experiencing a typical design handoff to engineering, let me take you on a brief tour:

1. Someone in your org, usually with a lot of opinions, decides on a new feature, component, or page that needs to be added to the code.
2. Your design team creates a mockup. It is beautiful and full of potential. If you're really lucky, it's even practical to implement in code. You're often not really lucky.
3. You begin to think about how to implement the design. Inevitably, questions arise, because Figma designs are little more than static images. What happens when you hover this button? Is there an animation on scroll? Is this still legible at tablet size?
4. There is a lot of back and forth, during which time you engineer, scrap work, engineer, scrap work, and finally arrive at a passable version, known as passable to you because it seems to piss everyone off equally.
5. Now, finally, you can do the fun part: finesse. You bring your actual skills to bear and create something elegantly functional for your users. There may be more iterations after this, but you're happy for now.

Sound familiar? Hopefully, it goes better at your org.

Where AI fits into the design-to-code process

Since AI arrived on the scene, everyone's been trying to shoehorn it into everything. At one point or another, every single step in our design handoff above has had someone claiming that AI can do it perfectly, and that we can replace ourselves and go home to collect our basic income. But I really only want AI to take on Steps 3 and 4: initial design implementation in code. For the rest, I very much like humans in charge.

This is why something like a design-to-code AI excites me. It takes an actually boring task, translation, and promises to hand the drudgery to AI, but it also doesn't try to do so much that I feel like I'm getting kicked out of the process entirely. AI scaffolds the boilerplate, and I can just edit the details. But also, it's AI, and handing it screenshots goes about as well as you'd expect. It's like if you've ever tried to draw a friend's face from memory. Sure, you can kinda tell it's them. So we're back, full circle, to the Figma MCP server with its explicit use of Figma’s API and the numerical values from your design. Let's try it and see how much better the results may be.

How to use the Figma MCP server

Okay, down to business. Feel free to follow along. We're going to:

1. Get Figma credentials and a sample design
2. Get the MCP server running in Cursor
3. Set up a quick target repo
4. Walk through an example design-to-code flow

Step 1: Get your Figma file and credentials

If you've already got some Figma designs handy, great! It's more rewarding to see your own designs come to life. Otherwise, feel free to visit Figma's listing of open design systems and pick one like the Material 3 Design Kit. I'll be using a screen from the Material 3 Design Kit for my test.

Note that you may have to copy/paste the design to your own file, right-click the layer, and "detach instance," so that it's no longer a component. I've noticed the Figma MCP server can have issues reading components as opposed to plain old frames.

Next, you'll need your Personal Access Token:

1. Head to your Figma account settings.
2. Go to the Security tab.
3. Generate a new token with the permissions and expiry date you prefer.

Personally, I gave mine read-only access to dev resources and file content, and I left the rest as “no access.” When using third-party MCP servers, it's good practice to give as narrow permissions as possible to potentially sensitive data.

Step 2: Set up your MCP client

Now that we've got our token, we can hop into an MCP client of your choosing. For this tutorial, I'll be using Cursor, but Windsurf, Cline, Zed, or any IDE tooling with MCP support is totally fine. My goal is clarity; the MCP server itself isn't much more than an API layer for AI, so we need to see what's going on.

In Cursor, head to Cursor Settings -> MCP -> Add new global MCP server. Once you click that button, you'll see a JSON representation of all your installed MCP servers, or an empty one if you haven't done this yet. You can add the community Figma MCP server like so (the args below follow the Framelink server's documented setup; substitute your own token):

{
  "mcpServers": {
    "Framelink Figma MCP": {
      "command": "npx",
      "args": ["-y", "figma-developer-mcp", "--figma-api-key=YOUR-FIGMA-API-KEY", "--stdio"]
    }
  }
}

To ensure Cursor can use npx, make sure you have Node installed on your system. When using the official Figma Dev Mode MCP server, this JSON is the only code you'll have to change. Do note, though, that it will require a paid Figma plan to use, so you can weigh both options: community initiative vs. standardized support.

Now, when you prompt Cursor in Agent mode, you'll see the AI make tool calls to the MCP server when you say things like, "Use the Figma MCP to..." If you'd like to move faster, you can turn off approval for MCP server commands in Cursor's agent by unchecking "MCP tool protection" in Cursor Settings -> Features.

Step 3: Set up a target repo

Next, we'll need somewhere to actually put the resulting code. When using this workflow, you're not always going to be starting from scratch; good design to code means implementing Figma designs in existing repos. For our purposes today, I'll just spin up a Next.js starter template with npx create-next-app@latest.

Step 4: Showtime

Okay, we should be all set. Select the relevant layer(s) in Figma, copy their links, and feed them into the Cursor agent. My prompt is just:

Can you replace my homepage with this Figma design? It should be a basic chat interface.

    Figma frame: <;

    Please use the Figma MCP server.

Thanks, and by the way, when the robot apocalypse comes, I'm one of the good ones.

Let's see it in action. Here's an example of some of the resulting code that it made:

{/* Message Input */}
    <div className="p-6 bg-white border-t border-">
    <div className="flex items-center space-x-4">
    <button className="p-2 rounded-full hover:bg-">
    <svg className="w-6 h-6 text-" fill="currentColor" viewBox="0 0 24 24">
    <path d="M12 2C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2zm5 11h-4v4h-2v-4H7v-2h4V7h2v4h4v2z"/>
    </svg>
    </button>

    <button className="p-2 rounded-full hover:bg-">
    <svg className="w-6 h-6 text-" fill="currentColor" viewBox="0 0 24 24">
    <path d="M11.99 2C6.47 2 2 6.48 2 12s4.47 10 9.99 10C17.52 22 22 17.52 22 12S17.52 2 11.99 2zM12 20c-4.42 0-8-3.58-8-8s3.58-8 8-8 8 3.58 8 8-3.58 8-8 8zm3.5-9c.83 0 1.5-.67 1.5-1.5S16.33 8 15.5 8 14 8.67 14 9.5s.67 1.5 1.5 1.5zm-7 0c.83 0 1.5-.67 1.5-1.5S9.33 8 8.5 8 7 8.67 7 9.5 7.67 11 8.5 11zm3.5 6.5c2.33 0 4.31-1.46 5.11-3.5H6.89c.8 2.04 2.78 3.5 5.11 3.5z"/>
    </svg>
    </button>

    <div className="flex-1 relative">
    <div className="flex items-center bg-rounded-full px-4 py-3">
    <button className="p-1 rounded-full hover:bg-mr-3">
    <svg className="w-5 h-5 text-" fill="currentColor" viewBox="0 0 24 24">
    <path d="M3 18h18v-2H3v2zm0-5h18v-2H3v2zm0-7v2h18V6H3z"/>
    </svg>
    </button>

    <input
    type="text"
    value={newMessage}
    onChange={(e) => setNewMessage(e.target.value)}
    onKeyPress={handleKeyPress}
    placeholder="Type a message..."
    className="flex-1 bg-transparent outline-none text-placeholder-"
    />

    <button
    onClick={handleSendMessage}
    className="p-1 rounded-full hover:bg-ml-3"
    >
    <svg className="w-5 h-5 text-" fill="currentColor" viewBox="0 0 24 24">
    <path d="M15.5 14h-.79l-.28-.27C15.41 12.59 16 11.11 16 9.5 16 5.91 13.09 3 9.5 3S3 5.91 3 9.5 5.91 16 9.5 16c1.61 0 3.09-.59 4.23-1.57l.27.28v.79l5 4.99L20.49 19l-4.99-5zm-6 0C7.01 14 5 11.99 5 9.5S7.01 5 9.5 5 14 7.01 14 9.5 11.99 14 9.5 14z"/>
    </svg>
    </button>
    </div>
    </div>
    </div>
    </div>

In total, the AI wrote a 278-line component that mostly works, in about two minutes. Honestly, not bad for a single shot. I can use a few more prompts to clean up the code, and then go in there by hand to finesse some of the CSS, which AI never seems to get as clean as I like. But it definitely saves me time over setting this all up by hand.

How to get better results from Figma MCP

There are a few things we can do to make the results even better:

Within your prompt, help the AI understand the purpose of the design and how exactly it fits into your existing code.
    Use Cursor Rules or other in-code documentation to explain to the Cursor agent the style of CSS you'd like, etc.
    Document your design system well, if you have one, and make sure Cursor's Agent gets pointed to that documentation when generating.
Don't overwhelm the agent. Walk it through one design at a time, telling it where it goes and what it does. The process isn't fully automatic yet.

Basically, it all boils down to more context, given granularly. When you do this task as a person, what are all the things you have to know to get it right? Break that down, write it in markdown files, and then point the agent there every time you need to do this task. Some markdown files you might attach in all design generations are:

A design system component list
    A CSS style guide
A framework style guide
    Test suite rules
Explicit instructions to iterate on failed lints, TypeScript checks, and tests

Individual prompts could just include what the new component should do and how it fits in the app. Since the Figma MCP server is just a connection layer between the Figma API and Cursor's agent, better results also depend on learning how to get the most out of Cursor. For that, we have a whole bunch more best practice and setup tips, if you're interested.

More than anything, don't expect perfect results. Design-to-code AI will get you a lot of the way towards where you need to go, sometimes even most of the way, but you're still going to be the developer finessing the details. The goal is just to save a little time. You're not trying to replace yourself.

Current limitations of Figma MCP

Personally, I like this Figma MCP workflow. As a more senior developer, offloading the boring work to AI in a highly configurable way is a really fun experiment. But there are still a lot of limitations.

MCP is a dev-only playground. Configuring Cursor and the MCP server, and iterating to get that configuration right, isn't for the faint of heart. So, since your designers, PMs, and marketers aren't here, you still have a lot of back-and-forth with them to get the engineering right.
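That iteration advice pays off most when the agent has unambiguous commands for verifying its own output. As a hedged sketch (script names and tools are assumptions here; a Next.js repo with Vitest is presumed, so adjust to whatever your project actually uses), you might expose lint, type-check, and test commands as npm scripts in package.json:

```json
{
  "scripts": {
    "lint": "next lint",
    "typecheck": "tsc --noEmit",
    "test": "vitest run"
  }
}
```

With this in place, a one-line Cursor rule such as "after generating a component, run npm run lint, npm run typecheck, and npm run test, and iterate until they pass" gives the agent a concrete, checkable definition of done.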
    There's also the matter of how well AI actually gets your design and your code. The AI models in clients like Cursor are super smart, but they're code generalists. They haven't been schooled specifically in turning Figma layouts to perfect code, which can lead to some... creative... interpretations. Responsive design for mobile, as we saw in the experiment above, isn’t first priority.
    It's not a deterministic process. Even if AI has perfect access to Figma data, it can still go off the rails. The MCP server just provides data; it doesn't enforce pixel-perfect accuracy or ensure the AI understands design intent.
Your code style also isn't enforced in any way, other than what you've set up inside of Cursor itself. Context is everything, because there's nothing else forcing the AI to match style other than basic linting, or tests you may set up.

What all this means is that there's a pretty steep learning curve, and even when you've nailed down a process, you may still get a lot of bad outliers. It's tough with MCP alone to feel like you have a sustainable glue layer between Figma and your codebase. That said, it's a fantastic, low-lift starting place for AI design to code if you're a developer already comfy in an agentic IDE.

Builder's approach to design to code

So, what if you're not a developer, or you're looking for a more predictable, sustainable workflow? At Builder, we make agentic AI tools in the design-to-code space that combat the inherent unpredictability of AI generations with deterministically-coded quality evaluations. Figma to code is a solved problem for us already. Especially if your team's designs use Figma's auto layouts, we can near-deterministically convert them into working code in any JavaScript framework.

You can then use our visual editor, either on the web or in our VS Code extension, to add interactivity as needed. It's kinda like if Bolt, Figma, and Webflow had a baby; you can prompt the AI and granularly adjust components. Vibe code DOOM or just fix your padding. Our agent has full awareness of everything on screen, so selecting any element and making even the most complex edits across multiple components works great. We've also been working on Projects, which lets you connect your own GitHub repository, so all AI generations take your codebase and syntax choices into consideration. 
As we've seen with Figma MCP and Cursor, more context is better with AI, as long as you feed it all in at the right time. Projects syncs your design system across Figma and code, and you can make any change into a PR for you and your team to review.

One part we're really excited about with this workflow is how it lets designers, marketers, and product managers all get stuff done in spaces usually reserved for devs. As we've been dogfooding internally, we've seen boards of Jira papercut tickets just kinda... vanish. Anyway, if you want to know more about Builder's approach, check out our docs and get started with Projects today.

So, is the Figma MCP worth your time?

Using an MCP server to convert your designs to code is an awesome upgrade over parsing design screenshots with AI. Its data-rich approach gets you much farther along, much faster than developer effort alone. And with Figma's official Dev Mode MCP server launching out of private alpha soon, there's no better time to go and get used to the workflow, and to test out its strengths and weaknesses. Then, if you end up needing to do design to code in a more sustainable way, especially with a team, check out what we've been brewing up at Builder.

Happy design engineering!
It's not a deterministic process. Even if AI has perfect access to Figma data, it can still go off the rails. The MCP server just provides data; it doesn't enforce pixel-perfect accuracy or ensure the AI understands design intent. Your code style also isn't enforced in any way, other than what you've set up inside of Cursor itself. Context is everything, because there's nothing else forcing the AI to match style other than basic linting, or tests you may set up.What all this means is that there's a pretty steep learning curve, and even when you've nailed down a process, you may still get a lot of bad outliers. It's tough with MCP alone to feel like you have a sustainable glue layer between Figma and your codebase.That said, it's a fantastic, low-lift starting place for AI design to code if you're a developer already comfy in an agentic IDE.Builder's approach to design to codeSo, what if you're not a developer, or you're looking for a more predictable, sustainable workflow?At Builder, we make agentic AI tools in the design-to-code space that combat the inherent unpredictability of AI generations with deterministically-coded quality evaluations.Figma to code is a solved problem for us already. Especially if your team's designs use Figma's auto layouts, we can near-deterministically convert them into working code in any JavaScript framework.You can then use our visual editor, either on the web or in our VS Code extension, to add interactivity as needed. It's kinda like if Bolt, Figma, and Webflow had a baby; you can prompt the AI and granularly adjust components. Vibe code DOOM or just fix your padding. Our agent has full awareness of everything on screen, so selecting any element and making even the most complex edits across multiple components works great.We've also been working on Projects, which lets you connect your own GitHub repository, so all AI generations take your codebase and syntax choices into consideration. 
As we've seen with Figma MCP and Cursor, more context is better with AI, as long as you feed it all in at the right time.Projects syncs your design system across Figma and code, and you can make any change into a PRfor you and your team to review.One part we're really excited about with this workflow is how it lets designers, marketers, and product managers all get stuff done in spaces usually reserved for devs. As we've been dogfooding internally, we've seen boards of Jira papercut tickets just kinda... vanish.Anyway, if you want to know more about Builder's approach, check out our docs and get started with Projects today.So, is the Figma MCP worth your time?Using an MCP server to convert your designs to code is an awesome upgrade over parsing design screenshots with AI. Its data-rich approach gets you much farther along, much faster than developer effort alone.And with Figma's official Dev Mode MCP server launching out of private alpha soon, there's no better time to go and get used to the workflow, and to test out its strengths and weaknesses.Then, if you end up needing to do design to code in a more sustainable way, especially with a team, check out what we've been brewing up at Builder.Happy design engineering! #design #code #with #figma #mcp
    WWW.BUILDER.IO
    Design to Code with the Figma MCP Server
    Translating your Figma designs into code can feel exactly like the kind of frustrating, low-skill gruntwork that's perfect for AI... except that most of us have also watched AI butcher hopeful screenshots into unresponsive spaghetti.

    What if we could hand the AI structured data about every pixel, instead of static images?

    This is how Figma Model Context Protocol (MCP) servers work. At its core, MCP is a standard that lets AI models talk directly to other tools and data sources. In our case, MCP means AI can tap into Figma's API, moving beyond screenshot guesswork to generations backed by the semantic details of your design.

    Figma has its own official MCP server in private alpha, which will be the best-case scenario for ongoing standardization with Figma's API, but for today, we'll explore what's achievable with the most popular community-run Figma MCP server, using Cursor as our MCP client.

    The anatomy of a design handoff, and why Figma MCP is a step forward

    It's helpful to know first what problem we're trying to solve with Figma MCP. In case you haven't had the distinct pleasure of experiencing a typical design handoff to engineering, let me take you on a brief tour:

    1. Someone in your org, usually with a lot of opinions, decides on a new feature, component, or page that needs to be added to the code.
    2. Your design team creates a mockup. It is beautiful and full of potential. If you're really lucky, it's even practical to implement in code. You're often not really lucky.
    3. You begin to think about how to implement the design. Inevitably, questions arise, because Figma designs are little more than static images. What happens when you hover this button? Is there an animation on scroll? Is this still legible at tablet size?
    4. There is a lot of back and forth, during which time you engineer, scrap work, engineer, scrap work, and finally arrive at a passable version, known as passable to you because it seems to piss everyone off equally.
    5. Now, finally, you can do the fun part: finesse.
    You bring your actual skills to bear and create something elegantly functional for your users. There may be more iterations after this, but you're happy for now.

    Sound familiar? Hopefully, it goes better at your org.

    Where AI fits into the design-to-code process

    Since AI arrived on the scene, everyone's been trying to shoehorn it into everything. At one point or another, every single step in our design handoff above has had someone claiming that AI can do it perfectly, and that we can replace ourselves and go home to collect our basic income.

    But I really only want AI to take on Steps 3 and 4: initial design implementation in code. For the rest, I very much like humans in charge. This is why something like a design-to-code AI excites me. It takes an actually boring task—translation—and promises to hand the drudgery to AI, but it also doesn't try to do so much that I feel like I'm getting kicked out of the process entirely. AI scaffolds the boilerplate, and I can just edit the details.

    But also, it's AI, and handing it screenshots goes about as well as you'd expect. It's like if you've ever tried to draw a friend's face from memory. Sure, you can kinda tell it's them.

    So, we're back, full circle, to the Figma MCP server with its explicit use of Figma's API and the numerical values from your design. Let's try it and see how much better the results may be.

    How to use the Figma MCP server

    Okay, down to business. Feel free to follow along. We're going to:

    1. Get Figma credentials and a sample design
    2. Get the MCP server running in Cursor (or your client of choice)
    3. Set up a quick target repo
    4. Walk through an example design-to-code flow

    Step 1: Get your Figma file and credentials

    If you've already got some Figma designs handy, great! It's more rewarding to see your own designs come to life.
    Otherwise, feel free to visit Figma's listing of open design systems and pick one like the Material 3 Design Kit. I'll be using this screen from the Material 3 Design Kit for my test.

    Note that you may have to copy/paste the design to your own file, right-click the layer, and "detach instance," so that it's no longer a component. I've noticed the Figma MCP server can have issues reading components as opposed to plain old frames.

    Next, you'll need your Personal Access Token:

    1. Head to your Figma account settings.
    2. Go to the Security tab.
    3. Generate a new token with the permissions and expiry date you prefer.

    Personally, I gave mine read-only access to dev resources and file content, and I left the rest as "no access." When using third-party MCP servers, it's good practice to grant as narrow permissions as possible to potentially sensitive data.

    Step 2: Set up your MCP client (Cursor)

    Now that we've got our token, we can hop into an MCP client of your choosing. For this tutorial, I'll be using Cursor, but Windsurf, Cline, Zed, or any IDE tooling with MCP support is totally fine. (Here's a breakdown of the differences.) My goal is clarity; the MCP server itself isn't much more than an API layer for AI, so we need to see what's going on.

    In Cursor, head to Cursor Settings -> MCP -> Add new global MCP server. Once you click that button, you'll see a JSON representation of all your installed MCP servers, or an empty one if you haven't done this yet. You can add the community Figma MCP server like so:

    {
      "mcpServers": {
        "Framelink Figma MCP": {
          "command": "npx",
          "args": ["-y", "figma-developer-mcp", "--figma-api-key=YOUR_FIGMA_ACCESS_TOKEN", "--stdio"]
        }
      }
    }

    To ensure Cursor can use npx, make sure you have Node installed on your system. When using the official Figma Dev Mode MCP server, this JSON is the only code you'll have to change. Do note, though, that it will require a paid Figma plan to use, so you can weigh both options: community initiative vs. standardized support.

    Now, when you prompt Cursor in Agent mode, you'll see the AI make tool calls to the MCP server when you say things like, "Use the Figma MCP to..." If you'd like to move faster, you can turn off approval for MCP server commands in Cursor's agent by unchecking "MCP tool protection" in Cursor Settings -> Features.

    Step 3: Set up a target repo

    Next, we'll need somewhere to actually put the resulting code. When using this workflow, you're not always going to be starting from scratch; good design to code means implementing Figma designs in existing repos. For our purposes today, I'll just spin up a Next.js starter template with npx create-next-app@latest.

    Step 4: Showtime

    Okay, we should be all set. Select the relevant layer(s) in Figma, copy their links, and feed them into the Cursor agent. My prompt is just:

    Can you replace my homepage with this Figma design? It should be a basic chat interface. Figma frame: <https://www.figma.com/design/CPDcrzkVChAzQ3q1pC5mXd/Figma-MCP-vs.-Builder-Fusion?node-id=2-215&t=K6v805pKyoU4FqdA-4> Please use the Figma MCP server.
    Thanks, and by the way, when the robot apocalypse comes, I'm one of the good ones.

    Let's see it in action. Here's an example of some of the resulting code that it made:

    {/* Message Input */}
    <div className="p-6 bg-white border-t border-[#E8DEF8]">
      <div className="flex items-center space-x-4">
        <button className="p-2 rounded-full hover:bg-[#F3EDF7]">
          <svg className="w-6 h-6 text-[#1D1B20]" fill="currentColor" viewBox="0 0 24 24">
            <path d="M12 2C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2zm5 11h-4v4h-2v-4H7v-2h4V7h2v4h4v2z"/>
          </svg>
        </button>
        <button className="p-2 rounded-full hover:bg-[#F3EDF7]">
          <svg className="w-6 h-6 text-[#1D1B20]" fill="currentColor" viewBox="0 0 24 24">
            <path d="M11.99 2C6.47 2 2 6.48 2 12s4.47 10 9.99 10C17.52 22 22 17.52 22 12S17.52 2 11.99 2zM12 20c-4.42 0-8-3.58-8-8s3.58-8 8-8 8 3.58 8 8-3.58 8-8 8zm3.5-9c.83 0 1.5-.67 1.5-1.5S16.33 8 15.5 8 14 8.67 14 9.5s.67 1.5 1.5 1.5zm-7 0c.83 0 1.5-.67 1.5-1.5S9.33 8 8.5 8 7 8.67 7 9.5 7.67 11 8.5 11zm3.5 6.5c2.33 0 4.31-1.46 5.11-3.5H6.89c.8 2.04 2.78 3.5 5.11 3.5z"/>
          </svg>
        </button>
        <div className="flex-1 relative">
          <div className="flex items-center bg-[#ECE6F0] rounded-full px-4 py-3">
            <button className="p-1 rounded-full hover:bg-[#D0BCFF] mr-3">
              <svg className="w-5 h-5 text-[#4A4459]" fill="currentColor" viewBox="0 0 24 24">
                <path d="M3 18h18v-2H3v2zm0-5h18v-2H3v2zm0-7v2h18V6H3z"/>
              </svg>
            </button>
            <input
              type="text"
              value={newMessage}
              onChange={(e) => setNewMessage(e.target.value)}
              onKeyPress={handleKeyPress}
              placeholder="Type a message..."
              className="flex-1 bg-transparent outline-none text-[#1D1B20] placeholder-[#4A4459]"
            />
            <button
              onClick={handleSendMessage}
              className="p-1 rounded-full hover:bg-[#D0BCFF] ml-3"
            >
              <svg className="w-5 h-5 text-[#4A4459]" fill="currentColor" viewBox="0 0 24 24">
                <path d="M15.5 14h-.79l-.28-.27C15.41 12.59 16 11.11 16 9.5 16 5.91 13.09 3 9.5 3S3 5.91 3 9.5 5.91 16 9.5 16c1.61 0 3.09-.59 4.23-1.57l.27.28v.79l5 4.99L20.49 19l-4.99-5zm-6 0C7.01 14 5 11.99 5 9.5S7.01 5 9.5 5 14 7.01 14 9.5 11.99 14 9.5 14z"/>
              </svg>
            </button>
          </div>
        </div>
      </div>
    </div>

    In total, the AI wrote a 278-line component that mostly works, in about two minutes. Honestly, not bad for a single shot. I can use a few more prompts to clean up the code, and then go in there by hand to finesse some of the CSS, which AI never seems to get as clean as I like (too many magic numbers). But it definitely saves me time over setting this all up by hand.

    How to get better results from Figma MCP

    There are a few things we can do to make the results even better:

    - Within your prompt, help the AI understand the purpose of the design and how exactly it fits into your existing code.
    - Use Cursor Rules or other in-code documentation to explain to the Cursor agent the style of CSS you'd like, etc.
    - Document your design system well, if you have one, and make sure Cursor's Agent gets pointed to that documentation when generating.
    - Don't overwhelm the agent. Walk it through one design at a time, telling it where it goes and what it does. The process isn't fully automatic yet.

    Basically, it all boils down to more context, given granularly. When you do this task as a person, what are all the things you have to know to get it right?
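    For instance, a minimal rules file distilled from that kind of checklist might look like the sketch below. This is a hypothetical example, not from the article; the file name, paths, and commands are placeholders you'd swap for your own project's conventions.

    ```markdown
    <!-- .cursor/rules/design-to-code.md (hypothetical example) -->
    # Design-to-code rules

    - Use Tailwind utility classes; never inline `style` attributes.
    - Pull colors and spacing from the design tokens in `tailwind.config.ts`; no hard-coded hex values.
    - Reuse existing components from `src/components/ui` before creating new ones.
    - After generating, run the linter and type checks, and iterate until both pass.
    ```

    The point isn't the specific rules; it's that the agent reads this file on every design-to-code task, so the context you'd otherwise repeat in each prompt is written down once.
    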
    Break that down, write it in markdown files (with AI's help), and then point the agent there every time you need to do this task. Some markdown files you might attach in all design generations are:

    - A design system component list
    - A CSS style guide
    - A framework (e.g., React) style guide
    - Test suite rules
    - Explicit instructions to iterate on failed lints, TypeScript checks, and tests

    Individual prompts could then just include what the new component should do and how it fits in the app.

    Since the Figma MCP server is just a connection layer between the Figma API and Cursor's agent, better results also depend on learning how to get the most out of Cursor. For that, we have a whole bunch more best practices and setup tips, if you're interested.

    More than anything, don't expect perfect results. Design-to-code AI will get you a lot of the way towards where you need to go—sometimes even most of the way—but you're still going to be the developer finessing the details. The goal is just to save a little time. You're not trying to replace yourself.

    Current limitations of Figma MCP

    Personally, I like this Figma MCP workflow. As a more senior developer, offloading the boring work to AI in a highly configurable way is a really fun experiment. But there are still a lot of limitations:

    - MCP is a dev-only playground. Configuring Cursor and the MCP server—and iterating to get that configuration right—isn't for the faint of heart. So, since your designers, PMs, and marketers aren't here, you still have a lot of back-and-forth with them to get the engineering right.
    - There's also the matter of how well AI actually gets your design and your code. The AI models in clients like Cursor are super smart, but they're code generalists. They haven't been schooled specifically in turning Figma layouts into perfect code, which can lead to some... creative... interpretations. Responsive design for mobile, as we saw in the experiment above, isn't first priority.
    - It's not a deterministic process. Even if AI has perfect access to Figma data, it can still go off the rails. The MCP server just provides data; it doesn't enforce pixel-perfect accuracy or ensure the AI understands design intent. Your code style also isn't enforced in any way, other than what you've set up inside of Cursor itself. Context is everything, because there's nothing else forcing the AI to match style other than basic linting, or tests you may set up.

    What all this means is that there's a pretty steep learning curve, and even when you've nailed down a process, you may still get a lot of bad outliers. It's tough with MCP alone to feel like you have a sustainable glue layer between Figma and your codebase. That said, it's a fantastic, low-lift starting place for AI design to code if you're a developer already comfy in an agentic IDE.

    Builder's approach to design to code

    So, what if you're not a developer, or you're looking for a more predictable, sustainable workflow? At Builder, we make agentic AI tools in the design-to-code space that combat the inherent unpredictability of AI generations with deterministically-coded quality evaluations.

    Figma to code is a solved problem for us already. Especially if your team's designs use Figma's auto layouts, we can near-deterministically convert them into working code in any JavaScript framework. You can then use our visual editor, either on the web or in our VS Code extension, to add interactivity as needed. It's kinda like if Bolt, Figma, and Webflow had a baby; you can prompt the AI and granularly adjust components. Vibe code DOOM or just fix your padding. Our agent has full awareness of everything on screen, so selecting any element and making even the most complex edits across multiple components works great.

    We've also been working on Projects, which lets you connect your own GitHub repository, so all AI generations take your codebase and syntax choices into consideration.
    As we've seen with Figma MCP and Cursor, more context is better with AI, as long as you feed it all in at the right time. Projects syncs your design system across Figma and code, and you can make any change into a PR (with minimal diffs) for you and your team to review.

    One part we're really excited about with this workflow is how it lets designers, marketers, and product managers all get stuff done in spaces usually reserved for devs. As we've been dogfooding internally, we've seen boards of Jira papercut tickets just kinda... vanish. Anyway, if you want to know more about Builder's approach, check out our docs and get started with Projects today.

    So, is the Figma MCP worth your time?

    Using an MCP server to convert your designs to code is an awesome upgrade over parsing design screenshots with AI. Its data-rich approach gets you much farther along, much faster than developer effort alone. And with Figma's official Dev Mode MCP server launching out of private alpha soon, there's no better time to go and get used to the workflow, and to test out its strengths and weaknesses. Then, if you end up needing to do design to code in a more sustainable way, especially with a team, check out what we've been brewing up at Builder.

    Happy design engineering!
  • 20 of the Best TV Shows on Prime Video

    We may earn a commission from links on this page. Like shopping on Amazon itself, Prime Video can sometimes feel like a jumble sale: a proliferation of TV and movies from every era, none of it terribly well-curated. There’s a lot to sort through, and the choices can be a little overwhelming. Presentation issues aside, there are some real gems to be found, as long as you’re willing to dig a bit—the streamer offers more than a few impressive exclusives, though they sometimes get lost amid the noise. Here are 20 of the best TV series Prime Video has to offer, including both ongoing and concluded shows.

    Overcompensating

    Comedian Benito Skinner plays himself, sort of, in this buzzy comedy that sees a former high school jock facing his freshman year in college, desperately trying to convince himself and everyone else that he's as straight as they come. Much of the show's appeal is in its deft blending of tones: It's a frequently raunchy college comedy, but it's simultaneously a sweet coming-of-age story about accepting yourself without worrying about what everyone else thinks. The impressive cast includes Adam DiMarco and Rish Shah. You can stream Overcompensating here.

    Étoile

    Amy Sherman-Palladino and Daniel Palladino are back on TV and back in the dance world with this series about two world-renowned ballet companies that decide to spice things up by swapping their most talented dancers. Each company is on the brink of financial disaster, and so Jack McMillan, director of the Metropolitan Ballet, and Geneviève Lavigne, director of Le Ballet National, come up with the plan, and recruit an eccentric billionaire to pay for it. Much of the comedy comes from the mismatched natures of their swapped dancers, and there's a tangible love of ballet that keeps things light, despite the fancy title.
    You can stream Étoile here.

    Fallout

    A shockingly effective video game adaptation, Fallout does post-apocalyptic TV with a lot more color and vibrancy than can typically be ascribed to the genre. The setup is a little complicated, but not belabored in the show itself: It's 2296 on an Earth devastated two centuries earlier by a nuclear war between the United States and China, exacerbated by conflicts between capitalists and so-called communists. Lucy MacLean emerges from the underground Vault where she's lived her whole life protected from the presumed ravages of the world above, hoping to find her missing father, who was kidnapped by raiders. The aboveground wasteland is dominated by various factions, each of which considers the others dangerous cults, and believes that they alone know mankind's way forward. It's also overrun by Ghouls, Gulpers, and other wild radiation monsters. Through all of this, Lucy remains just about the only human with any belief in humanity, or any desire to make things better. You can stream Fallout here.

    Deadloch

    Both an excellent crime procedural and an effective satire of the genre, this Australian import does about as well at setting up its central mystery as Broadchurch and its many imitators. Kate Box stars as Dulcie Collins, fastidious senior sergeant of the police force in the fictional town of the title. When a body turns up dead on the beach, Dulcie is joined by Madeleine Sami's Eddie Redcliffe, a crude and generally obnoxious detective brought in to help solve the case. Unraveling the web of secrets and mysteries in the tiny Tasmanian town is appropriately addictive, with the added bonus of cop thriller tropes getting mercilessly mocked all the way. You can stream Deadloch here.

    The Lord of the Rings: The Rings of Power

    All the talk around The Rings of Power in the lead-up to the series had to do with the cost of the planned five seasons, expected to be somewhere in the billion-dollar range.
    At that price point, it’s tempting to expect a debacle—but the resulting series is actually quite good, blending epic conflict with more grounded characters in a manner that evokes both Tolkien, and Peter Jackson’s Lord of the Rings films. Set thousands of years before those tales, the series follows an ensemble cast led by Morfydd Clark as Elven outcast Galadriel and, at the other end of the spectrum, Markella Kavenagh as Nori, a Harfoot with a yearning for adventure who finds herself caught up in the larger struggles of a world about to see the rise of the Dark Lord Sauron, the fall of the idyllic island kingdom of Númenor, and the last alliance of Elves and humans. You can stream The Rings of Power here.

    Reacher

    Getting high marks for his portrayal of Lee Child's character is Alan Ritchson, playing Reacher with an appropriately commanding physical presence. The first season finds the former U.S. Army military policeman visiting the rural town of Margrave, Georgia...where he’s quickly arrested for murder. His attempts to clear his name find him caught up in a complex conspiracy involving the town’s very corrupt police force, as well as shady local businessmen and politicians. Subsequent seasons find our ripped drifter reconnecting with members of his old army special-investigations unit, including Frances Neagley, who's getting her own spin-off. You can stream Reacher here.

    The Bondsman

    It's tempting not to include The Bondsman among Prime's best, given that it's representative of an increasingly obnoxious trend: shows that get cancelled before they ever really got a chance. This Kevin Bacon-led action horror thriller did well with critics and on the streaming charts, and it's had a consistent spot among Prime's top ten streaming shows, but it got the pink slip anyway.
    Nevertheless, what we did get is a lot of fun: Bacon plays Hub Halloran, a bounty hunter who dies on the job only to discover that he's been resurrected by the literal devil, for whom he now works. It comes to a moderately satisfying conclusion, despite the cancellation. You can stream The Bondsman here.

    The Expanse

    A pick-up from the SyFy channel after that network all but got out of the original series business, The Expanse started good and only got better with each succeeding season. Starring Steven Strait, Shohreh Aghdashloo, and Dominique Tipper among a sizable ensemble, the show takes place in a near-ish future in which we’ve spread out into the solar system, while largely taking all of the usual political bullshit and conflicts with us. A salvage crew comes upon an alien microorganism with the potential to upend pretty much everything, if humanity can stop fighting over scraps long enough to make it matter.
    The show brings a sense of gritty realism to TV sci-fi, without entirely sacrificing optimism—or, at least, the idea that well-intentioned individuals can make a difference. You can stream The Expanse here.

    Mr. & Mrs. Smith

    One-upping the Brad Pitt/Angelina Jolie movie on which it's based, Mr. & Mrs. Smith stars Donald Glover and Maya Erskine as a couple of spies tasked to pose as a married couple while coordinating on missions. Smartly, each episode takes on a standalone mission in a different location, while complicating the relationship between the two and gradually upping the stakes until the season finale, which sees them pitted against each other. The show is returning for season two, though it's unclear if Glover and Erskine will be returning, or if we'll be getting a new Mr. & Mrs. You can stream Mr. & Mrs. Smith here.

    Good Omens

    Michael Sheen and David Tennant are delightful as, respectively, the hopelessly naive angel Aziraphale and the demon Crowley, wandering the Earth for millennia and determined not to let the perpetual conflict between their two sides get in the way of their mismatched friendship. In the show’s world, from the 1990 novel by Neil Gaiman and Terry Pratchett, heaven and hell are less representative of good and evil than hidebound bureaucracies, more interested in scoring points on each other than in doing anything useful for anyone down here. It’s got a sly, quirky, sometimes goofy sense of humor, even while it asks some big questions about who should get to decide what’s right and what’s wrong. Following some depressingly gross revelations about writer and showrunner Gaiman, it was announced that he'd be off the production and the third season would be reduced to a movie-length conclusion, date TBD. You can stream Good Omens here.

    The Marvelous Mrs. Maisel

    Mrs. Maisel was one of Prime’s first and buzziest original series, a comedy-drama from Amy Sherman-Palladino about the titular Midge Maisel, a New York housewife of the late 1950s who discovers a talent for stand-up comedy. Inspired by the real-life careers of comedians like Totie Fields and Joan Rivers, the show is both warm and funny, with great performances and dialogue; it also achieves something rare in being a show about comedy that’s actually funny. You can stream Mrs. Maisel here.

    The Boys

    There’s a lot of superhero stuff out there, no question, but, as there was no series quite like the Garth Ennis and Darick Robertson comic book on which this show is based, there’s nothing else quite like The Boys. The very dark satire imagines a world in which superheroes are big with the public, but whose powers don’t make them any better than the average jerk. When his girlfriend is gruesomely killed by a superhero who couldn’t really care less, Wee Hughie is recruited by the title agency. Led by Billy Butcher, the Boys watch over the world’s superpowered individuals, putting them down when necessary and possible. A concluding fifth season is on the way, as is a second season of the live-action spin-off. An animated miniseries came out in 2022.

    The Man in the High Castle

    From a novel by Philip K. Dick, The Man in the High Castle takes place in an alternate history in which the Axis powers won World War II, and in which the United States is split down the middle, with Japan governing the west and Germany the east. The title’s man in the high castle offers an alternate view, though, one in which the Allies actually won, with the potential to rally opposition to the Axis rulers. As the show progresses through its four seasons, the parallels to our increasingly authoritarian-friendly world sharpen, making it one of the more relevant shows of recent years. You can stream The Man in the High Castle here.
    The Wheel of Time

    An effective bit of fantasy storytelling, The Wheel of Time sees five people taken from a secluded village by Moiraine Damodred, a powerful magic user who believes that one of them is the reborn Dragon: a being who will either heal the world, or destroy it entirely. The show has an epic sweep while smartly focusing on the very unworldly villagers, experiencing much of this at the same time as the audience. This is another mixed recommendation in that, while the show itself is quite good, it has just been cancelled following a third season that saw it really getting into its groove. The show goes through the fourth and fifth books of Robert Jordan's fantasy series, so, I suppose, you can always jump into the novels to finish the story. You can stream Wheel of Time here.

    The Devil’s Hour

    Jessica Raine joins Peter Capaldi for a slightly convoluted but haunting series that throws in just about every horror trope that you can think of while still managing to ground things in the two lead performances. Raine plays a social worker whose life is coming apart on almost every level: She’s caring for her aging mother, her marriage is ending, her son is withdrawn, and she wakes up at exactly 3:33 am every morning. She’s as convincing in the role as Capaldi is absolutely terrifying as a criminal linked to at least one killing who knows a lot more than he makes clear. You can stream The Devil's Hour here.

    Batman: Caped Crusader

    I know, there's a lot of Batman out there. But this one's got real style, harkening back to Batman: The Animated Series from the 1990s. With a 1940s-esque setting, the show dodges some of the more outlandish superhero tropes to instead focus on a Gotham City rife with crime, corrupt cops, and gang warfare. There's just enough serialization across the first season to keep things addictive. You can stream Caped Crusader here.
Secret LevelThis is pretty fun: an anthology of animated shorts from various creative teams that tell stories set within the worlds of variousvideo games, including Unreal, Warhammer, Sifu, Mega Man, and Honor of Kings. It's hard to find consistent threads given the variety of source material, but that's kinda the point: There's a little something for everyone, and most shorts don't demand any extensive knowledge of game lore—though, naturally, they're a bit more fun for the initiated. The voice cast includes the likes of Arnold Schwarzenegger, his son Patrick Schwarzenegger, Keanu Reeves, Gabriel Luna, Ariana Greenblatt, and Adewale Akinnuoye-Agbaje. You can stream Secret Level here. CrossJames Patterson's Alex Cross novels have been adapted three times before, all with mixed results: Morgan Freeman played the character twice, and Tyler Perry took on the role in 2012. Here, the forensic psychologist/police detective of a few dozen novels is played by Aldis Hodge, and it feels like he's finally nailed it. There are plenty of cop-drama tropes at work here, but the series is fast-paced and intense, and Hodge is instantly compelling in the iconic lead role. You can stream Cross here. FleabagFleabag isn’t a Prime original per se, nor even a co-production, but Amazon is the show’s American distributor and still brands it as such, so we’re going to count it. There’s no quick synopsis here, but stars Phoebe Waller-Bridge as the title characterin the comedy drama about a free-spirited, but also deeply angry single woman in living in London. Waller-Bridge won separate Emmys as the star, creator, and writer of the series, and co-stars Sian Clifford, Olivia Coleman, Fiona Shaw, and Kristin Scott Thomas all received well-deserved nominations. You can stream Fleabag here.
    #best #shows #prime #video
    20 of the Best TV Shows on Prime Video
    We may earn a commission from links on this page.Like shopping on Amazon itself, Prime Video can sometimes feel like a jumble sale: a proliferation of TV and movies from every era, none of it terribly well-curated. There’s a lot to sort through, and the choices can be a little overwhelming. Presentation issues aside, there are some real gems to be found, as long as you’re willing to dig a bit—the streamer offers more than a few impressive exclusives, though they sometimes get lost amid the noise. Here are 20 of the best TV series Prime Video has to offer, including both ongoing and concluded shows.OvercompensatingComedian Benito Skinner plays himself, sort of, in this buzzy comedy that sees a former high school jock facing his freshman year in college, desperately trying to convince himself and everyone else that he's as straight as they come. Much of the show's appeal is in its deft blending of tones: It's a frequently raunchy college comedy, but it's simultaneously a sweet coming-of-age story about accepting yourself without worrying about what everyone else thinks. The impressive cast includes Adam DiMarcoand Rish ShahYou can stream Overcompensating here. ÉtoileAmy Sherman-Palladino and David Palladinoare back on TV and back in the dance worldwith this series about two world-renowned ballet companiesthat decide to spice things up by swapping their most talented dancers. Each company is on the brink of financial disaster, and so Jack McMillan, director of the Metropolitan Ballet, and Geneviève Lavigne, director of of Le Ballet National, come up with the plan, and recruit an eccentric billionaireto pay for it. Much of the comedy comes from the mismatched natures of their swapped dancers, and there's a tangible love of ballet that keeps things light, despite the fancy title. 
You can stream Étoile here.FalloutA shockingly effective video game adaptation, Fallout does post-apocalyptic TV with a lot more color and vibrancy than can typically be ascribed to the genre. The setup is a little complicated, but not belabored in the show itself: It's 2296 on an Earth devastated two centuries earlier by a nuclear war between the United States and China, exacerbated by conflicts between capitalists and so-called communists. Lucy MacLeanemerges from the underground Vault where she's lived her whole life protected from the presumed ravages of the world above, hoping to find her missing father, who was kidnapped by raiders. The aboveground wasteland is dominated by various factions, each of which considers the others dangerous cults, and believes that they alone know mankind's way forward. It's also overrun by Ghouls, Gulpers, and other wild radiation monsters. Through all of this, Lucy remains just about the only human with any belief in humanity, or any desire to make things better. You can stream Fallout here.DeadlochBoth an excellent crime procedural and an effective satire of the genre, this Australian import does about as well as setting up its central mystery as Broadchurch and its manyimitators. Kate Box stars as Dulcie Collins, fastidious senior sergeant of the police force in the fictional town of the title. When a body turns up dead on the beach, Dulcie is joined by Madeleine Sami's Eddie Redcliffe, a crude and generally obnoxious detective brought in to help solve the case. Unraveling the web of secrets and mysteries in the tiny Tasmanian town is appropriately addictive, with the added bonus of cop thriller tropes getting mercilessly mocked all the way. You can stream Deadlock here.The Lord of the Rings: The Rings of PowerAll the talk around The Rings of Power in the lead-up to the series had to do with the cost of the planned five seasons expected to be somewhere in the billion dollar range. 
At that price point, it’s tempting to expect a debacle—but the resulting series is actually quite good, blending epic conflict with more grounded characters in a manner that evokes both Tolkien, and Peter Jackson’s Lord of the Rings films. Set thousands of years before those tales, the series follows an ensemble cast lead by Morfydd Clark as Elven outcast Galadriel and, at the other end of the spectrum, Markella Kavenagh as Nori, a Harfootwith a yearning for adventure who finds herself caught up in the larger struggles of a world about to see the rise of the Dark Lord Sauron, the fall of the idyllic island kingdom of Númenor, and the the last alliance of Elves and humans. You can stream The Rings of Power here.ReacherGetting high marks for his portrayal of the Lee Childs’ characteris Alan Ritchson, playing Reacher with an appropriately commanding physical presence. The first season finds the former U.S. Army military policeman visiting the rural town of Margrave, Georgia...where he’s quickly arrested for murder. His attempts to clear his name find him caught up in a complex conspiracy involving the town’s very corrupt police force, as well as shady local businessmen and politicians. Subsequent seasons find our ripped drifter reconnecting with members of his old army special-investigations unit, including Frances Neagley, who's getting her own spin-off. You can stream Reacher here. The BondsmanIt's tempting not to include The Bondsman among Prime's best, given that it's representative of an increasingly obnoxious trend: shows that get cancelled before they ever really got a chance. This Kevin Bacon-led action horror thriller did well with critics and on the streaming charts, and it's had a consistent spot among Prime's top ten streaming shows, but it got the pink slip anyway. 
Nevertheless, what we did get is a lot of fun: Bacon plays Hub Halloran, a bounty hunter who dies on the job only to discover that he's been resurrected by the literal devil, for whom he now works. It comes to a moderately satisfying conclusion, despite the cancellation. You can stream The Bondsman here. The Lord of the Rings: The Rings of PowerAll the talk around The Rings of Power in the lead-up to the series had to do with the cost of the planned five seasons expected to be somewhere in the billion dollar range. At that price point, it’s tempting to expect a debacle—but the resulting series is actually quite good, blending epic conflict with more grounded characters in a manner that evokes both Tolkien, and Peter Jackson’s Lord of the Rings films. Set thousands of years before those tales, the series follows an ensemble cast lead by Morfydd Clark as Elven outcast Galadriel and, at the other end of the spectrum, Markella Kavenagh as Nori, a Harfootwith a yearning for adventure who finds herself caught up in the larger struggles of a world about to see the rise of the Dark Lord Sauron, the fall of the idyllic island kingdom of Númenor, and the the last alliance of Elves and humans. You can stream The Rings of Power here.The ExpanseA pick-up from the SyFy channel after that network all but got out of the original series business, The Expanse started good and only got better with each succeeding season. Starring Steven Strait, Shohreh Aghdashloo, and Dominique Tipper among a sizable ensemble, the show takes place in a near-ish future in which we’ve spread out into the solar system, while largely taking all of the usual political bullshit and conflicts with us. A salvage crew comes upon an alien microorganism with the potential to upend pretty much everything, if humanity can stop fighting over scraps long enough to make it matter. 
The show brings a sense of gritty realism to TV sci-fi, without entirely sacrificing optimism—or, at least, the idea that well-intentioned individuals can make a difference. You can stream The Expanse here. Mr. & Mrs. SmithOne-upping the Brad Pitt/Angelina Jolie movie on which it's based, Mr. & Mrs. Smith stars Donald Glover and Maya Erskine as a couple of spies tasked to pose as a married couple while coordinatingon missions. Smartly, each episode takes on a standalone mission in a different location, while complicating the relationship between the two and gradually upping the stakes until the season finale, which sees them pitted against each other. The show is returning for season two, though it's unclear if Glover and Erskine will be returning, or if we'll be getting a new Mr. & Mrs. You can stream Mr. & Mrs. Smith here. Good OmensMichael Sheen and David Tennant are delightful as, respectively, the hopelessly naive angel Aziraphale and the demon Crowley, wandering the Earth for millennia and determined not to let the perpetual conflict between their two sides get in the way of their mismatched friendship. In the show’s world, from the 1990 novel by Neil Gaiman and Terry Pratchett, heaven and hell are are less representative of good and evil than hidebound bureaucracies, more interested in scoring points on each other than in doing anything useful for anyone down here. It’s got a sly, quirky, sometimes goofy sense of humor, even while it asks some big questions about who should get to decide what’s right and what’s wrong. Following some depressingly gross revelations about writer and showrunner Gaiman, it was announced that he'd be off the production and the third season would be reduced to a movie-length conclusion, date tbd. You can stream Good Omens here. The Marvelous Mrs. MaiselMrs. 
Maisel was one of Prime’s first and buzziest original series, a comedy-drama from Amy Sherman-Palladinoabout the title’s Midge Maisel, a New York housewife of the late 1950s who discovers a talent for stand-up comedy. Inspired by the real-life careers of comedians like Totie Fields and Joan Rivers, the show is both warm and funny, with great performances and dialogue; it also achieves something rare in being a show about comedy that’s actually funny. You can stream Mrs. Maisel here. The BoysThere’s a lot of superhero stuff out there, no question, but, as there was no series quite like the Garth Ennis and Darick Robertson comic book on which this show is based, there’s nothing else quite like The Boys. The very dark satire imagines a world in which superheroes are big with the public, but whose powers don’t make them any better than the average jerk. When his girlfriend is gruesomely killed by a superhero who couldn’t really care less, Wee Hughieis recruited by the title agency. Led by Billy Butcher, the Boys watch over the world’s superpowered individuals, putting them down when necessary and possible. A concluding fifth season is on the way, as is a second season of the live-action spin-off. An animated miniseriescame out in 2022. The Man in the High CastleFrom a novel by Philip K. Dick, The Man in the High Castle takes place in an alternate history in which the Axis powers won World War II, and in which the United States is split down the middle; Japan governing the west and Germany the east. The title’s man in the high castle offers an alternate view, though, one in which the Allies actually won, with the potential to rally opposition to the Axis rulers. As the show progresses through its four seasons, the parallels to our increasingly authoritarian-friendly world, making it one of the more relevant shows of recent years. You can stream The Man in the High Castle here. 
The Wheel of TimeAn effective bit of fantasy storytelling, The Wheel of Time sees five people taken from a secluded village by Moiraine Damodred, a powerful magic user who believes that one of them is the reborn Dragon: a being who will either heal the world, or destroy it entirely. The show has an epic sweep while smartly focusing on the very unworldly villagers, experiencing much of this at the same time as the audience. This is another mixed recommendation in that, while the show itself is quite good, it has just been cancelled following a third season that saw it really getting into its groove. The show goes through the fourth and fifth books of Robert Jordan's fantasy series, so, I suppose, you can always jump into the novels to finish the story. You can stream Wheel of Time here. The Devil’s HourJessica Rainejoins Peter Capaldifor a slightly convoluted but haunting series that throws in just about every horror trope that you can think of while still managing to ground things in the two lead performances. Raine plays a social worker whose life is coming apart on almost every level: She’s caring for her aging mother, her marriage is ending, her son is withdrawn, and she wakes up at 3:33 am every morning exactly. She’s as convincing in the role as Capaldi is absolutely terrifying as a criminal linked to at least one killing who knows a lot more than he makes clear. You can stream The Devil's Hour here. Batman: Caped CrusaderI know, there's a lot of Batman out there. But this one's got real style, harkening back to Batman: The Animated Series from the 1990s. With a 1940s-esque setting, the show dodges some of the more outlandish superhero tropes to instead focus on a Gotham City rife with crime, corrupt cops, and gang warfare. There's just enough serialization across the first season to keep things addictive. You can stream Caped Crusader here. 
Secret LevelThis is pretty fun: an anthology of animated shorts from various creative teams that tell stories set within the worlds of variousvideo games, including Unreal, Warhammer, Sifu, Mega Man, and Honor of Kings. It's hard to find consistent threads given the variety of source material, but that's kinda the point: There's a little something for everyone, and most shorts don't demand any extensive knowledge of game lore—though, naturally, they're a bit more fun for the initiated. The voice cast includes the likes of Arnold Schwarzenegger, his son Patrick Schwarzenegger, Keanu Reeves, Gabriel Luna, Ariana Greenblatt, and Adewale Akinnuoye-Agbaje. You can stream Secret Level here. CrossJames Patterson's Alex Cross novels have been adapted three times before, all with mixed results: Morgan Freeman played the character twice, and Tyler Perry took on the role in 2012. Here, the forensic psychologist/police detective of a few dozen novels is played by Aldis Hodge, and it feels like he's finally nailed it. There are plenty of cop-drama tropes at work here, but the series is fast-paced and intense, and Hodge is instantly compelling in the iconic lead role. You can stream Cross here. FleabagFleabag isn’t a Prime original per se, nor even a co-production, but Amazon is the show’s American distributor and still brands it as such, so we’re going to count it. There’s no quick synopsis here, but stars Phoebe Waller-Bridge as the title characterin the comedy drama about a free-spirited, but also deeply angry single woman in living in London. Waller-Bridge won separate Emmys as the star, creator, and writer of the series, and co-stars Sian Clifford, Olivia Coleman, Fiona Shaw, and Kristin Scott Thomas all received well-deserved nominations. You can stream Fleabag here. #best #shows #prime #video
    LIFEHACKER.COM
    20 of the Best TV Shows on Prime Video
    We may earn a commission from links on this page. Like shopping on Amazon itself, Prime Video can sometimes feel like a jumble sale: a proliferation of TV and movies from every era, none of it terribly well-curated. There’s a lot to sort through, and the choices can be a little overwhelming. Presentation issues aside, there are some real gems to be found, as long as you’re willing to dig a bit—the streamer offers more than a few impressive exclusives, though they sometimes get lost amid the noise. Here are 20 of the best TV series Prime Video has to offer, including both ongoing and concluded shows.

    Overcompensating (2025 – ) Comedian Benito Skinner plays himself, sort of, in this buzzy comedy that sees a former high school jock facing his freshman year in college, desperately trying to convince himself and everyone else that he's as straight as they come (relatable, except for the jock part). Much of the show's appeal is in its deft blending of tones: It's a frequently raunchy college comedy, but it's simultaneously a sweet coming-of-age story about accepting yourself without worrying about what everyone else thinks. The impressive cast includes Adam DiMarco (The White Lotus) and Rish Shah (Ms. Marvel). You can stream Overcompensating here.

    Étoile (2025 – , renewed for season two) Amy Sherman-Palladino and Daniel Palladino (Gilmore Girls, The Marvelous Mrs. Maisel) are back on TV and back in the dance world (following Bunheads) with this series about two world-renowned ballet companies (one in NYC and one in Paris) that decide to spice things up by swapping their most talented dancers. Each company is on the brink of financial disaster, and so Jack McMillan (Luke Kirby), director of the Metropolitan Ballet, and Geneviève Lavigne (Charlotte Gainsbourg), director of Le Ballet National, come up with the plan, and recruit an eccentric billionaire (Simon Callow) to pay for it.
Much of the comedy comes from the mismatched natures of their swapped dancers, and there's a tangible love of ballet that keeps things light, despite the fancy title. You can stream Étoile here.

Fallout (2024 – , renewed for second and third seasons) A shockingly effective video game adaptation, Fallout does post-apocalyptic TV with a lot more color and vibrancy than can typically be ascribed to the genre (in the world of Fallout, the aesthetic of the 1950s hung on for a lot longer than it did in ours). The setup is a little complicated, but not belabored in the show itself: It's 2296 on an Earth devastated two centuries earlier by a nuclear war between the United States and China, exacerbated by conflicts between capitalists and so-called communists. Lucy MacLean (Ella Purnell) emerges from the underground Vault where she's lived her whole life, protected from the presumed ravages of the world above, hoping to find her missing father, who was kidnapped by raiders. The aboveground wasteland is dominated by various factions, each of which considers the others dangerous cults and believes that it alone knows mankind's way forward. It's also overrun by Ghouls, Gulpers, and other wild radiation monsters. Through all of this, Lucy remains just about the only human with any belief in humanity, or any desire to make things better. You can stream Fallout here.

Deadloch (2023 – , renewed for a second season) Both an excellent crime procedural and an effective satire of the genre, this Australian import does about as well at setting up its central mystery as Broadchurch and its many (many) imitators. Kate Box stars as Dulcie Collins, fastidious senior sergeant of the police force in the fictional town of the title. When a body turns up on the beach, Dulcie is joined by Madeleine Sami's Eddie Redcliffe, a crude and generally obnoxious detective brought in to help solve the case.
Unraveling the web of secrets and mysteries in the tiny Tasmanian town is appropriately addictive, with the added bonus of cop-thriller tropes getting mercilessly mocked all the way. You can stream Deadloch here.

The Lord of the Rings: The Rings of Power (2022 – , third season coming) All the talk around The Rings of Power in the lead-up to the series had to do with the cost of the planned five seasons, expected to be somewhere in the billion-dollar range. At that price point, it’s tempting to expect a debacle—but the resulting series is actually quite good, blending epic conflict with more grounded characters in a manner that evokes both Tolkien and Peter Jackson’s Lord of the Rings films. Set thousands of years before those tales, the series follows an ensemble cast led by Morfydd Clark as Elven outcast Galadriel and, at the other end of the spectrum, Markella Kavenagh as Nori, a Harfoot (the people we’ll much later know as Hobbits) with a yearning for adventure who finds herself caught up in the larger struggles of a world about to see the rise of the Dark Lord Sauron, the fall of the idyllic island kingdom of Númenor, and the last alliance of Elves and humans. You can stream The Rings of Power here.

Reacher (2022 – , fourth season coming) Getting high marks for his portrayal of Lee Child’s character (from both book and TV fans) is Alan Ritchson (Titans), playing Reacher with an appropriately commanding physical presence. The first season finds the former U.S. Army military policeman visiting the rural town of Margrave, Georgia...where he’s quickly arrested for murder. His attempts to clear his name find him caught up in a complex conspiracy involving the town’s very corrupt police force, as well as shady local businessmen and politicians. Subsequent seasons find our ripped drifter reconnecting with members of his old army special-investigations unit, including Frances Neagley (Maria Sten), who's getting her own spin-off. You can stream Reacher here.
The Bondsman (2025, one season) It's tempting not to include The Bondsman among Prime's best, given that it's representative of an increasingly obnoxious trend: shows that get cancelled before they ever really get a chance. This Kevin Bacon-led action-horror thriller did well with critics and on the streaming charts, and it had a consistent spot among Prime's top ten streaming shows, but it got the pink slip anyway. Nevertheless, what we did get is a lot of fun: Bacon plays Hub Halloran, a bounty hunter who dies on the job only to discover that he's been resurrected by the literal devil, for whom he now works. It comes to a moderately satisfying conclusion, despite the cancellation. You can stream The Bondsman here.

The Expanse (2015 – 2022, six seasons) A pick-up from the Syfy channel after that network all but got out of the original series business, The Expanse started good and only got better with each succeeding season.
Starring Steven Strait, Shohreh Aghdashloo, and Dominique Tipper among a sizable ensemble, the show takes place in a near-ish future in which we’ve spread out into the solar system, while largely taking all of the usual political bullshit and conflicts with us. A salvage crew comes upon an alien microorganism with the potential to upend pretty much everything, if humanity can stop fighting over scraps long enough to make it matter. The show brings a sense of gritty realism to TV sci-fi, without entirely sacrificing optimism—or, at least, the idea that well-intentioned individuals can make a difference. You can stream The Expanse here.

Mr. & Mrs. Smith (2024 – , renewed for a second season) One-upping the Brad Pitt/Angelina Jolie movie on which it's based, Mr. & Mrs. Smith stars Donald Glover and Maya Erskine as a pair of spies tasked with posing as a married couple while coordinating (and sometimes competing against one another) on missions. Smartly, each episode takes on a standalone mission in a different location, while complicating the relationship between the two and gradually upping the stakes until the season finale, which sees them pitted against each other. The show is returning for season two, though it's unclear whether Glover and Erskine will be back, or if we'll be getting a new Mr. & Mrs. You can stream Mr. & Mrs. Smith here.

Good Omens (2019 – , conclusion coming) Michael Sheen and David Tennant are delightful as, respectively, the hopelessly naive angel Aziraphale and the demon Crowley, wandering the Earth for millennia and determined not to let the perpetual conflict between their two sides get in the way of their mismatched friendship. In the show’s world, from the 1990 novel by Neil Gaiman and Terry Pratchett, heaven and hell are less representative of good and evil than hidebound bureaucracies, more interested in scoring points on each other than in doing anything useful for anyone down here.
It’s got a sly, quirky, sometimes goofy sense of humor, even while it asks some big questions about who should get to decide what’s right and what’s wrong. Following some depressingly gross revelations about writer and showrunner Gaiman, it was announced that he'd be off the production and that the third season would be reduced to a movie-length conclusion, date TBD. You can stream Good Omens here.

The Marvelous Mrs. Maisel (2017 – 2023, five seasons) Mrs. Maisel was one of Prime’s first and buzziest original series, a comedy-drama from Amy Sherman-Palladino (Gilmore Girls) about the titular Midge Maisel (Rachel Brosnahan), a New York housewife of the late 1950s who discovers a talent for stand-up comedy. Inspired by the real-life careers of comedians like Totie Fields and Joan Rivers, the show is both warm and funny, with great performances and dialogue; it also achieves something rare in being a show about comedy that’s actually funny. You can stream Mrs. Maisel here.

The Boys (2019 – , fifth and final season coming) There’s a lot of superhero stuff out there, no question, but, as there was no series quite like the Garth Ennis and Darick Robertson comic book on which this show is based, there’s nothing else quite like The Boys. The very dark satire imagines a world in which superheroes are big with the public, but whose powers don’t make them any better than the average jerk. When his girlfriend is gruesomely killed by a superhero who couldn’t really care less (collateral damage, ya know), Wee Hughie (Jack Quaid) is recruited by the title agency. Led by Billy Butcher (Karl Urban), the Boys watch over the world’s superpowered individuals, putting them down when necessary and possible. A concluding fifth season is on the way, as is a second season of the live-action spin-off (Gen V). An animated miniseries (Diabolical) came out in 2022.

The Man in the High Castle (2015 – 2019, four seasons) From a novel by Philip K.
Dick (whose work has been the basis for Blade Runner, Total Recall, Minority Report, and A Scanner Darkly, among many others), The Man in the High Castle takes place in an alternate history in which the Axis powers won World War II, and in which the United States is split down the middle, Japan governing the west and Germany the east. The titular man in the high castle offers an alternate view, though, one in which the Allies actually won, with the potential to rally opposition to the Axis rulers. As the show progresses through its four seasons, the parallels to our increasingly authoritarian-friendly world sharpen, making it one of the more relevant shows of recent years. You can stream The Man in the High Castle here.

The Wheel of Time (2021 – 2025, three seasons) An effective bit of fantasy storytelling, The Wheel of Time sees five people taken from a secluded village by Moiraine Damodred (Rosamund Pike), a powerful magic user who believes that one of them is the reborn Dragon: a being who will either heal the world or destroy it entirely. The show has an epic sweep while smartly focusing on the very unworldly villagers, who experience much of this at the same time as the audience. This is another mixed recommendation in that, while the show itself is quite good, it has just been cancelled following a third season that saw it really getting into its groove. The show gets through the fourth and fifth books of Robert Jordan's fantasy series, so, I suppose, you can always jump into the novels to finish the story. You can stream The Wheel of Time here.

The Devil’s Hour (2022 – , renewed for a third season) Jessica Raine (Call the Midwife) joins Peter Capaldi (The Thick of It, Doctor Who) for a slightly convoluted but haunting series that throws in just about every horror trope that you can think of while still managing to ground things in the two lead performances.
Raine plays a social worker whose life is coming apart on almost every level: She’s caring for her aging mother, her marriage is ending, her son is withdrawn, and she wakes up at exactly 3:33 am every morning. She’s as convincing in the role as Capaldi is absolutely terrifying as a criminal linked to at least one killing who knows a lot more than he makes clear. You can stream The Devil's Hour here.

Batman: Caped Crusader (2024 – , second season coming) I know, there's a lot of Batman out there. But this one's got real style, harkening back to Batman: The Animated Series from the 1990s (no surprise, given that Bruce Timm developed this one too). With a 1940s-esque setting, the show dodges some of the more outlandish superhero tropes to instead focus on a Gotham City rife with crime, corrupt cops, and gang warfare. There's just enough serialization across the first season to keep things addictive. You can stream Caped Crusader here.

Secret Level (2024 – , renewed for a second season) This is pretty fun: an anthology of animated shorts from various creative teams that tell stories set within the worlds of various (15 so far) video games, including Unreal, Warhammer, Sifu, Mega Man, and Honor of Kings. It's hard to find consistent threads given the variety of source material, but that's kinda the point: There's a little something for everyone, and most shorts don't demand any extensive knowledge of game lore—though, naturally, they're a bit more fun for the initiated. The voice cast includes the likes of Arnold Schwarzenegger, his son Patrick Schwarzenegger, Keanu Reeves, Gabriel Luna, Ariana Greenblatt, and Adewale Akinnuoye-Agbaje. You can stream Secret Level here.

Cross (2024 – , renewed for a second season) James Patterson's Alex Cross novels have been adapted three times before, all with mixed results: Morgan Freeman played the character twice, and Tyler Perry took on the role in 2012.
Here, the forensic psychologist/police detective of a few dozen novels is played by Aldis Hodge (Leverage, One Night in Miami...), and it feels like the character has finally been nailed. There are plenty of cop-drama tropes at work here, but the series is fast-paced and intense, and Hodge is instantly compelling in the iconic lead role. You can stream Cross here.

Fleabag (2016 – 2019, two seasons)

Fleabag isn't a Prime original per se, nor even a co-production, but Amazon is the show's American distributor and still brands it as such, so we're going to count it. There's no quick synopsis here, but the show stars Phoebe Waller-Bridge as the title character (only ever known as Fleabag) in a comedy drama about a free-spirited but also deeply angry single woman living in London. Waller-Bridge won separate Emmys as the star, creator, and writer of the series (all in the same year), and co-stars Sian Clifford, Olivia Colman, Fiona Shaw, and Kristin Scott Thomas all received well-deserved nominations. You can stream Fleabag here.
  • F1 25 review – nailed-on realism, even when you drive the wrong way round

    Formula One aficionados are famously fanatical, but they still need a few good reasons to splash out on the annual instalment of the sport's officially licensed game. Luckily F1 25 – crafted, as ever, in Birmingham by Codemasters – has many. There's the return of Braking Point, the game's story mode; a revamp of My Team, the most popular career mode; a tie-up with the forthcoming F1: The Movie; and perhaps most intriguing of all, the chance to race round three tracks in the reverse direction to normal.

    F1 25 feels like something of a culmination – last year's F1 24, for example, introduced a new physics model which required tweaks after launch, but has now been thoroughly fettled, so F1 25's essential building blocks of car handling (and tyre wear) plus state-of-the-art graphics (this year, Codemasters has moved on from previous-gen consoles) are simply impeccable.

    Impeccable graphics … F1 25. Photograph: Electronic Arts

    This has freed the company to delve into the sort of fantasy elements that you can find in games but not real life. Chief among those is the aforementioned third instalment of Braking Point, which follows the fortunes of the fictional Konnersport team. Over 15 chapters it knits together a deliciously tortuous soap-opera-style storyline with some cleverly varied on-track action.

    More fundamentally, the most popular of the career modes – My Team, which ramps up the management element by casting you as the owner of a new team – has received the bulk of Codemasters' attention. This time around, you stay in your corporate lane and drive instead as either of the two drivers you've hired, which makes much more sense than previously. As does separating research and development, meaning you must allocate new parts to specific drivers. Further effective tweaks render My Team 2.0, as Codemasters calls it, much more convincing and realistic.

    As ever, you can jump online, against various standards of opposition, or on to individual tracks, or play split-screen against a friend. But there's a new mode called Challenge Career, which lets you play timed scenarios offline, then post them to a global leaderboard. It's a nice idea, designed to take you out of your driver-aids comfort zone, but the scenarios will only get going properly after launch, so the jury remains out on its merits. A number of scenarios from F1: The Movie will also be delivered as post-launch episodes, and it's pretty cool to be able to step into a Formula One car as Brad Pitt's fictional racer.

    For diehard Formula One fans, though, the chance to race around Silverstone, Zandvoort and Austria's Red Bull Ring in the wrong direction (with the tracks remodelled to accommodate new pit lanes and the like) might just be the clincher. Reversing the tracks' direction completely changes their nature in a deliciously intriguing manner.

    With a real-life rule change next year due to change the cars radically, Formula One currently feels like it's at a generational peak, and F1 25 is so brilliantly crafted, and so full of elements that generate an irresistible mix of nailed-on realism and fantasy, that it, too, feels like the culmination of a generation of officially licensed Formula One games. F1 25? Peak F1.
    WWW.THEGUARDIAN.COM
  • Round Up: The Reviews Are In For Fantasy Life i: The Girl Who Steals Time

    Level-5's new game Fantasy Life i: The Girl Who Steals Time made its debut earlier this week and was generally well received by fans. Now the first reviews are beginning to surface online for the Switch version of the game and other platforms.
    IGN's review "in progress" notes how this new entry "seems to have nailed the balance between day-in-the-life cozy activities and more action-packed exploration to the point where it's really hard to predict what might happen next". It also mentions how the characters and story so far are both "wonderfully goofy and more substantial", with the RPG-like Life system of levelling also "easy to get lost in".
    CGMagazine went hands-on with the PC version of the game, awarding it a score of 9 out of 10 and calling it an improvement on the series in "every conceivable way" that also delivers one of the most "feature-rich games of the year".

    "Fantasy Life i: The Girl Who Steals Time is a must-play game, even if you’re not typically a fan of sim or farming games, making it one of the best games put out by Level-5 to date."

    Tech Gaming gave Level-5's new title a score of 93% on PC, describing it as a "charming and content-abundant life simulation RPG that skillfully blends crafting, combat, and exploration", although it felt the multiplayer mode was "limited" and the combat "merely adequate".

    "The title’s tender storytelling and a stirring soundtrack make it a thoroughly rewarding solo adventure."

    Some players have also seemingly been caught off-guard by the design choices in the multiplayer component of the game.
    YouTube channel Miss Bubbles tried out the Switch version, but will probably be sticking with other versions of the game. As for the gameplay on Nintendo's hardware, the combat was "smooth and fine" and, performance-wise, the experience was "okay".

    "I really hope that the confirmed Switch 2 upgrade will be a good upgrade, but until then I'm just going to be playing on Steam Deck"

    And SwitchUp's YouTube review was far more glowing about the Switch version, awarding it a score of 91% and calling it "unmissable". On the performance front, the frame rate apparently targets 30fps online, but there are some big drops when there's more happening on-screen. In single-player, though, the frame rate is more consistent and visual quality is better, while in handheld the experience is "around about the same", with load times under 10 seconds.

    "I'd say visuals and performance they score 17 out of 20...Fantasy Life i: The Girl Who Steals Time is a wonderful experience, it's easily in my top five games of the year so far."

    If you are still on the fence about the new Fantasy Life game for Switch, you could always consider the Switch 2 paid upgrade. As previously detailed this version of the game promises to include improved resolution and frame rate. Pricing for this upgrade hasn't been confirmed yet, but it's expected to be just a few coins.
    Be on the lookout for the Nintendo Life Switch review of Fantasy Life i: The Girl Who Steals Time, which should be up on the site at some point next week. We've also got some direct video footage of the Switch release on our YouTube channel.

    Are you playing the new Fantasy Life game? Have you tried out the Switch version or are you going to wait until the Switch 2 is out? Let us know in the comments.


    Liam is a news writer and reviewer across Hookshot Media. He's been writing about games for more than 15 years and is a lifelong fan of many iconic video game characters.


    WWW.NINTENDOLIFE.COM
CGShares https://cgshares.com