• Download Unreal Engine 2D animation plugin Odyssey for free

    Epic Games has made Odyssey, Praxinos’s 2D animation plugin for Unreal Engine, available for free through Fab, its online marketplace. The software – which can be used for storyboarding or texturing 3D models as well as creating 2D animation – is available for free indefinitely, and will continue to be updated.
    A serious professional 2D animation tool created by former TVPaint staff

    Created by a team that includes former developers of standalone 2D animation software TVPaint, Odyssey has been in development since 2019. Part of that work was also funded by Epic Games, with Praxinos receiving an Epic MegaGrant for two of Odyssey’s precursors: painting plugin Iliad and storyboard and layout plugin Epos.
    Odyssey itself was released last year after beta testing at French animation studios including Ellipse Animation, and originally cost €1,200 for a perpetual license.

    Create 2D animation, storyboards, or textures for 3D models

    Although Odyssey’s main function is to create 2D animation – for movie and broadcast projects, motion graphics, or even games – the plugin adds a wider 2D toolset to Unreal Engine. Other use cases include storyboarding – you can import image sequences and turn them into storyboards – and texturing, either by painting 2D texture maps, or painting onto 3D meshes.
    It supports both 2D and 3D workflows, with the 2D editors – which include a flipbook editor as well as the 2D texture and animation editors – complemented by a 3D viewport.
    The bitmap painting toolset makes use of Unreal Engine’s Blueprint system, making it possible for users to create new painting brushes using a node-based workflow, and supports pressure sensitivity on graphics tablets.
    There is also a vector toolset for creating hard-edged shapes.
    Animation features include onion skinning, Toon Boom-style shift and trace, and automatic inbetweening.
    The plugin supports standard 2D and 3D file formats, including PSD, FBX and USD.
    Available for free indefinitely, but future updates planned

    Epic Games regularly makes Unreal Engine assets available for free through Fab, but usually only for a limited period of time. Odyssey is different, in that it is available for free indefinitely.
    However, it will continue to get updates: according to Epic Games’ blog post, Praxinos “plans to work in close collaboration with Epic Games and continue to enhance Odyssey”.
    As well as Odyssey itself, Praxinos offers custom tools development and training, which will hopefully also help to support future development.
    System requirements and availability

    Odyssey is compatible with Unreal Engine 5.6 on Windows and macOS. It is available for free under a Fab Standard License, including for commercial use.
    Read more about Odyssey on Praxinos’s website
    Find more detailed information in Odyssey’s online manual
    Download Unreal Engine 2D animation plugin Odyssey for free

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
  • Cursor’s Anysphere nabs $9.9B valuation, soars past $500M ARR

    Anysphere, the maker of AI coding assistant Cursor, has raised $900 million at a $9.9 billion valuation, Bloomberg reported. The round was led by returning investor Thrive Capital, with participation from Andreessen Horowitz, Accel, and DST Global.
    The massive round is Anysphere’s third fundraise in less than a year. The 3-year-old startup secured its previous capital haul of $100 million at a pre-money valuation of $2.5 billion late last year, as TechCrunch was first to report.
    AI coding assistants, often referred to as “vibe coders,” have emerged as one of AI’s most popular applications, with Cursor leading the category. Anysphere’s annualized revenue (ARR) has been doubling approximately every two months, a person familiar with the company told TechCrunch. The company has surpassed $500 million in ARR, sources told Bloomberg, a 60% increase from the $300 million we reported in mid-April.
    Cursor offers developers tiered pricing. After a two-week free trial, the company converts users into paying customers, who can opt for either a $20 Pro offering or a $40 monthly business subscription.
    Until recently, the majority of the company’s revenue came from individual user subscriptions, Bloomberg reported. However, Anysphere is now offering enterprise licenses, allowing companies to purchase the application for their teams at a higher price point.
    Earlier this year, the company was approached by OpenAI and other potential buyers, but Anysphere turned down those offers. The ChatGPT maker bought Windsurf, another fast-growing AI assistant, reportedly for $3 billion.
  • Design to Code with the Figma MCP Server

    Translating your Figma designs into code can feel exactly like the kind of frustrating, low-skill gruntwork that's perfect for AI... except that most of us have also watched AI butcher hopeful screenshots into unresponsive spaghetti. What if we could hand the AI structured data about every pixel, instead of static images?
    This is how Figma Model Context Protocol (MCP) servers work. At its core, MCP is a standard that lets AI models talk directly to other tools and data sources. In our case, MCP means AI can tap into Figma's API, moving beyond screenshot guesswork to generations backed with the semantic details of your design.
    Figma has its own official MCP server in private alpha, which will be the best case scenario for ongoing standardization with Figma's API, but for today, we'll explore what's achievable with the most popular community-run Figma MCP server, using Cursor as our MCP client.

    The anatomy of a design handoff, and why Figma MCP is a step forward

    It's helpful to know first what problem we're trying to solve with Figma MCP.
    In case you haven't had the distinct pleasure of experiencing a typical design handoff to engineering, let me take you on a brief tour:
    Someone in your org, usually with a lot of opinions, decides on a new feature, component, or page that needs to be added to the code.
    Your design team creates a mockup. It is beautiful and full of potential. If you're really lucky, it's even practical to implement in code. You're often not really lucky.
    You begin to think how to implement the design. Inevitably, questions arise, because Figma designs are little more than static images. What happens when you hover this button? Is there an animation on scroll? Is this still legible in tablet size?
    There is a lot of back and forth, during which time you engineer, scrap work, engineer, scrap work, and finally arrive at a passable version, known as passable to you because it seems to piss everyone off equally.
    Now, finally, you can do the fun part: finesse. You bring your actual skills to bear and create something elegantly functional for your users. There may be more iterations after this, but you're happy for now.
    Sound familiar? Hopefully, it goes better at your org.

    Where AI fits into the design-to-code process

    Since AI arrived on the scene, everyone's been trying to shoehorn it into everything. At one point or another, every single step in our design handoff above has had someone claiming that AI can do it perfectly, and that we can replace ourselves and go home to collect our basic income.
    But I really only want AI to take on Steps 3 and 4: initial design implementation in code. For the rest, I very much like humans in charge. This is why something like a design-to-code AI excites me. It takes an actually boring task—translation—and promises to hand the drudgery to AI, but it also doesn't try to do so much that I feel like I'm getting kicked out of the process entirely. AI scaffolds the boilerplate, and I can just edit the details.
    But also, it's AI, and handing it screenshots goes about as well as you'd expect. It's like if you've ever tried to draw a friend's face from memory. Sure, you can kinda tell it's them.
    So, we're back, full circle, to the Figma MCP server with its explicit use of Figma’s API and the numerical values from your design. Let's try it and see how much better the results may be.

    How to use the Figma MCP server

    Okay, down to business. Feel free to follow along. We're going to:
    Get Figma credentials and a sample design
    Get the MCP server running in Cursor (or your client of choice)
    Set up a quick target repo
    Walk through an example design-to-code flow

    Step 1: Get your Figma file and credentials

    If you've already got some Figma designs handy, great! It's more rewarding to see your own designs come to life. Otherwise, feel free to visit Figma's listing of open design systems and pick one like the Material 3 Design Kit.
    I'll be using this screen from the Material 3 Design Kit for my test. Note that you may have to copy/paste the design to your own file, right-click the layer, and "detach instance," so that it's no longer a component. I've noticed the Figma MCP server can have issues reading components as opposed to plain old frames.
    Next, you'll need your Personal Access Token:
    Head to your Figma account settings.
    Go to the Security tab.
    Generate a new token with the permissions and expiry date you prefer.
    Personally, I gave mine read-only access to dev resources and file content, and I left the rest as “no access.” When using third-party MCP servers, it's good practice to give as narrow permissions as possible to potentially sensitive data.
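    Not part of the MCP setup itself, but if you want to sanity-check the token before wiring it into a client, you can call Figma's REST API with it directly. This is a minimal sketch assuming Node 18+ (for the global fetch); the environment variable name, the file key placeholder, and the idea of printing page names are illustrative assumptions, not anything the MCP server requires.

    // Sanity-check a Figma Personal Access Token against the REST API.
    // FIGMA_TOKEN and FILE_KEY are placeholders you supply yourself.
    const FIGMA_TOKEN = process.env.FIGMA_TOKEN ?? "";
    const FILE_KEY = "YOUR_FILE_KEY"; // the string after /design/ or /file/ in a Figma URL

    async function main() {
      // /v1/me returns the account the token belongs to - a quick validity check.
      const me = await fetch("https://api.figma.com/v1/me", {
        headers: { "X-Figma-Token": FIGMA_TOKEN },
      });
      if (!me.ok) throw new Error(`Token check failed: ${me.status}`);
      console.log("Token OK for:", (await me.json()).email);

      // /v1/files/:key returns the document tree - the structured data an MCP server reads.
      const file = await fetch(`https://api.figma.com/v1/files/${FILE_KEY}`, {
        headers: { "X-Figma-Token": FIGMA_TOKEN },
      });
      const doc = await file.json();
      // Log the top-level pages so you can see real node names and IDs.
      for (const page of doc.document.children) {
        console.log(`${page.name} (${page.id})`);
      }
    }

    main().catch(console.error);

    If the first call succeeds, the token is live and scoped correctly, and the second call shows the kind of node data the AI will be working from.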
    "mcpServers": {
    "Framelink Figma MCP": {
    "command": "npx",
    "args":}
    }
    }To ensure Cursor can use npx, make sure you have Node installed on your system.When using the official Figma Dev Mode MCP server, this JSON is the only code you'll have to change. Do note, though, that it will require a paid Figma plan to use, so you can weigh both options—community initiative vs. standardized support.Now, when you prompt Cursor in Agent mode, you'll see the AI make tool calls to the MCP server when you say things like, "Use the Figma MCP to..."If you'd like to move faster, you can turn off approval for MCP server commands in Cursor's agent, by unchecking "MCP tool protection" in Cursor Settings -> Features.Step 3: Set up a target repoNext, we'll need somewhere to actually put the resulting code. When using this workflow, you're not always going to be starting from scratch; good design to code means implementing Figma designs in existing repos.For our purposes today, I'll just spin up a Next.js starter template, with npx create-next-app@latest.Step 4: ShowtimeOkay, we should be all set. Select the relevant layerin Figma, copy their links, and feed it into the Cursor agent. My prompt is just:Can you replace my homepage with this Figma design? It should be a basic chat interface.

    Figma frame: <https://www.figma.com/design/CPDcrzkVChAzQ3q1pC5mXd/Figma-MCP-vs.-Builder-Fusion?node-id=2-215&t=K6v805pKyoU4FqdA-4>

    Please use the Figma MCP server.

    Thanks, and by the way, when the robot apocalypse comes, I'm one of the good ones.

    Let's see it in action. Here's an example of some of the resulting code that it made:

    {/* Message Input */}
    <div className="p-6 bg-white border-t border-[#E8DEF8]">
      <div className="flex items-center space-x-4">
        <button className="p-2 rounded-full hover:bg-[#F3EDF7]">
          <svg className="w-6 h-6 text-[#1D1B20]" fill="currentColor" viewBox="0 0 24 24">
            <path d="M12 2C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2zm5 11h-4v4h-2v-4H7v-2h4V7h2v4h4v2z"/>
          </svg>
        </button>

        <button className="p-2 rounded-full hover:bg-[#F3EDF7]">
          <svg className="w-6 h-6 text-[#1D1B20]" fill="currentColor" viewBox="0 0 24 24">
            <path d="M11.99 2C6.47 2 2 6.48 2 12s4.47 10 9.99 10C17.52 22 22 17.52 22 12S17.52 2 11.99 2zM12 20c-4.42 0-8-3.58-8-8s3.58-8 8-8 8 3.58 8 8-3.58 8-8 8zm3.5-9c.83 0 1.5-.67 1.5-1.5S16.33 8 15.5 8 14 8.67 14 9.5s.67 1.5 1.5 1.5zm-7 0c.83 0 1.5-.67 1.5-1.5S9.33 8 8.5 8 7 8.67 7 9.5 7.67 11 8.5 11zm3.5 6.5c2.33 0 4.31-1.46 5.11-3.5H6.89c.8 2.04 2.78 3.5 5.11 3.5z"/>
          </svg>
        </button>

        <div className="flex-1 relative">
          <div className="flex items-center bg-[#ECE6F0] rounded-full px-4 py-3">
            <button className="p-1 rounded-full hover:bg-[#D0BCFF] mr-3">
              <svg className="w-5 h-5 text-[#4A4459]" fill="currentColor" viewBox="0 0 24 24">
                <path d="M3 18h18v-2H3v2zm0-5h18v-2H3v2zm0-7v2h18V6H3z"/>
              </svg>
            </button>

            <input
              type="text"
              value={newMessage}
              onChange={(e) => setNewMessage(e.target.value)}
              onKeyPress={handleKeyPress}
              placeholder="Type a message..."
              className="flex-1 bg-transparent outline-none text-[#1D1B20] placeholder-[#4A4459]"
            />

            <button
              onClick={handleSendMessage}
              className="p-1 rounded-full hover:bg-[#D0BCFF] ml-3"
            >
              <svg className="w-5 h-5 text-[#4A4459]" fill="currentColor" viewBox="0 0 24 24">
                <path d="M15.5 14h-.79l-.28-.27C15.41 12.59 16 11.11 16 9.5 16 5.91 13.09 3 9.5 3S3 5.91 3 9.5 5.91 16 9.5 16c1.61 0 3.09-.59 4.23-1.57l.27.28v.79l5 4.99L20.49 19l-4.99-5zm-6 0C7.01 14 5 11.99 5 9.5S7.01 5 9.5 5 14 7.01 14 9.5 11.99 14 9.5 14z"/>
              </svg>
            </button>
          </div>
        </div>
      </div>
    </div>
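    For completeness: the excerpt references state and handlers (newMessage, setNewMessage, handleKeyPress, handleSendMessage) that live elsewhere in the generated component. The hook below is a hypothetical sketch of what that wiring typically looks like, purely to make the excerpt self-contained; it is my assumption, not the AI's actual output.

    import { useState, type KeyboardEvent } from "react";

    // Hypothetical chat-input wiring: keep the draft in state, send on click or Enter.
    export function useChatInput(onSend: (text: string) => void) {
      const [newMessage, setNewMessage] = useState("");

      const handleSendMessage = () => {
        const text = newMessage.trim();
        if (!text) return;  // ignore empty input
        onSend(text);       // hand the message to the parent, e.g. append it to the message list
        setNewMessage("");  // clear the field afterwards
      };

      const handleKeyPress = (e: KeyboardEvent<HTMLInputElement>) => {
        if (e.key === "Enter") handleSendMessage();
      };

      return { newMessage, setNewMessage, handleSendMessage, handleKeyPress };
    }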
    In total, the AI wrote a 278-line component that mostly works, in about two minutes. Honestly, not bad for a single shot.
    I can use a few more prompts to clean up the code, and then go in there by hand to finesse some of the CSS, which AI never seems to get as clean as I like. But it definitely saves me time over setting this all up by hand.

    How to get better results from Figma MCP

    There are a few things we can do to make the results even better:
    Within your prompt, help the AI understand the purpose of the design and how exactly it fits into your existing code.
    Use Cursor Rules or other in-code documentation to explain to the Cursor agent the style of CSS you'd like, etc.
    Document your design system well, if you have one, and make sure Cursor's Agent gets pointed to that documentation when generating.
    Don't overwhelm the agent. Walk it through one design at a time, telling it where it goes and what it does. The process isn't fully automatic yet.
    Basically, it all boils down to more context, given granularly. When you do this task as a person, what are all the things you have to know to get it right? Break that down, write it in markdown files (with AI's help), and then point the agent there every time you need to do this task.
    Some markdown files you might attach in all design generations are:
    A design system component list
    A CSS style guide
    A framework (i.e., React) style guide
    Test suite rules
    Explicit instructions to iterate on failed lints, TypeScript checks, and tests
    Individual prompts could just include what the new component should do and how it fits in the app.
    Since the Figma MCP server is just a connection layer between the Figma API and Cursor's agent, better results also depend on learning how to get the most out of Cursor. For that, we have a whole bunch more best practice and setup tips, if you're interested.
    More than anything, don't expect perfect results. Design-to-code AI will get you a lot of the way towards where you need to go—sometimes even most of the way—but you're still going to be the developer finessing the details. The goal is just to save a little time. You're not trying to replace yourself.

    Current limitations of Figma MCP

    Personally, I like this Figma MCP workflow. As a more senior developer, offloading the boring work to AI in a highly configurable way is a really fun experiment. But there are still a lot of limitations.
    MCP is a dev-only playground. Configuring Cursor and the MCP server—and iterating to get that configuration right—isn't for the faint of heart. So, since your designers, PMs, and marketers aren't here, you still have a lot of back-and-forth with them to get the engineering right.
    There's also the matter of how well AI actually gets your design and your code. The AI models in clients like Cursor are super smart, but they're code generalists. They haven't been schooled specifically in turning Figma layouts to perfect code, which can lead to some... creative... interpretations. Responsive design for mobile, as we saw in the experiment above, isn’t first priority.
    It's not a deterministic process. Even if AI has perfect access to Figma data, it can still go off the rails. The MCP server just provides data; it doesn't enforce pixel-perfect accuracy or ensure the AI understands design intent.
    Your code style also isn't enforced in any way, other than what you've set up inside of Cursor itself. Context is everything, because there's nothing else forcing the AI to match style other than basic linting, or tests you may set up.
    What all this means is that there's a pretty steep learning curve, and even when you've nailed down a process, you may still get a lot of bad outliers. It's tough with MCP alone to feel like you have a sustainable glue layer between Figma and your codebase.
    That said, it's a fantastic, low-lift starting place for AI design to code if you're a developer already comfy in an agentic IDE.

    Builder's approach to design to code

    So, what if you're not a developer, or you're looking for a more predictable, sustainable workflow?
    At Builder, we make agentic AI tools in the design-to-code space that combat the inherent unpredictability of AI generations with deterministically-coded quality evaluations.
    Figma to code is a solved problem for us already. Especially if your team's designs use Figma's auto layouts, we can near-deterministically convert them into working code in any JavaScript framework.
    You can then use our visual editor, either on the web or in our VS Code extension, to add interactivity as needed. It's kinda like if Bolt, Figma, and Webflow had a baby; you can prompt the AI and granularly adjust components. Vibe code DOOM or just fix your padding. Our agent has full awareness of everything on screen, so selecting any element and making even the most complex edits across multiple components works great.
    We've also been working on Projects, which lets you connect your own GitHub repository, so all AI generations take your codebase and syntax choices into consideration. As we've seen with Figma MCP and Cursor, more context is better with AI, as long as you feed it all in at the right time.
    Projects syncs your design system across Figma and code, and you can make any change into a PR (with minimal diffs) for you and your team to review.
    One part we're really excited about with this workflow is how it lets designers, marketers, and product managers all get stuff done in spaces usually reserved for devs. As we've been dogfooding internally, we've seen boards of Jira papercut tickets just kinda... vanish.
    Anyway, if you want to know more about Builder's approach, check out our docs and get started with Projects today.

    So, is the Figma MCP worth your time?

    Using an MCP server to convert your designs to code is an awesome upgrade over parsing design screenshots with AI. Its data-rich approach gets you much farther along, much faster than developer effort alone.
    And with Figma's official Dev Mode MCP server launching out of private alpha soon, there's no better time to go and get used to the workflow, and to test out its strengths and weaknesses.
    Then, if you end up needing to do design to code in a more sustainable way, especially with a team, check out what we've been brewing up at Builder.
    Happy design engineering!
    #design #code #with #figma #mcp
    Design to Code with the Figma MCP Server
    Translating your Figma designs into code can feel exactly like the kind of frustrating, low-skill gruntwork that's perfect for AI... except that most of us have also watched AI butcher hopeful screenshots into unresponsive spaghetti.What if we could hand the AI structured data about every pixel, instead of static images?This is how Figma Model Context Protocolservers work. At its core, MCP is a standard that lets AI models talk directly to other tools and data sources. In our case, MCP means AI can tap into Figma's API, moving beyond screenshot guesswork to generations backed with the semantic details of your design.Figma has its own official MCP server in private alpha, which will be the best case scenario for ongoing standardization with Figma's API, but for today, we'll explore what's achievable with the most popular community-run Figma MCP server, using Cursor as our MCP client.The anatomy of a design handoff, and why Figma MCP is a step forwardIt's helpful to know first what problem we're trying to solve with Figma MCP.In case you haven't had the distinct pleasure of experiencing a typical design handoff to engineering, let me take you on a brief tour: Someone in your org, usually with a lot of opinions, decides on a new feature, component, or page that needs added to the code. Your design team creates a mockup. It is beautiful and full of potential. If you're really lucky, it's even practical to implement in code. You're often not really lucky. You begin to think how to implement the design. Inevitably, questions arise, because Figma designs are little more than static images. What happens when you hover this button? Is there an animation on scroll? Is this still legible in tablet size? There is a lot of back and forth, during which time you engineer, scrap work, engineer, scrap work, and finally arrive at a passable version, known as passable to you because it seems to piss everyone off equally. Now, finally, you can do the fun part: finesse. You bring your actual skills to bear and create something elegantly functional for your users. There may be more iterations after this, but you're happy for now.Sound familiar? Hopefully, it goes better at your org.Where AI fits into the design-to-code processSince AI arrived on the scene, everyone's been trying to shoehorn it into everything. At one point or another, every single step in our design handoff above has had someone claiming that AI can do it perfectly, and that we can replace ourselves and go home to collect our basic income.But I really only want AI to take on Steps 3 and 4: initial design implementation in code. For the rest, I very much like humans in charge. This is why something like a design-to-code AI excites me. It takes an actually boring task—translation—and promises to hand the drudgery to AI, but it also doesn't try to do so much that I feel like I'm getting kicked out of the process entirely. AI scaffolds the boilerplate, and I can just edit the details.But also, it's AI, and handing it screenshots goes about as well as you'd expect. It's like if you've ever tried to draw a friend's face from memory. Sure, you can kinda tell it's them.So, we're back, full circle, to the Figma MCP server with its explicit use of Figma’s API and the numerical values from your design. Let's try it and see how much better the results may be.How to use the Figma MCP serverOkay, down to business. Feel free to follow along. 
We're going to:Get Figma credentials and a sample design Get the MCP server running in CursorSet up a quick target repo Walk through an example design to code flowStep 1: Get your Figma file and credentialsIf you've already got some Figma designs handy, great! It's more rewarding to see your own designs come to life. Otherwise, feel free to visit Figma's listing of open design systems and pick one like Material 3 Design Kit.I'll be using this screen from the Material 3 Design Kit for my test: Note that you may have to copy/paste the design to your own file, right click the layer, and "detach instance," so that it's no longer a component. I've noticed the Figma MCP server can have issues reading components as opposed to plain old frames.Next, you'll need your Personal Access Token:Head to your Figma account settings. Go to the Security tab. Generate a new token with the permissions and expiry date you prefer.Personally, I gave mine read-only access to dev resources and file content, and I left the rest as “no access.”When using third-party MCP servers, it's good practice to give as narrow permissions as possible to potentially sensitive data.Step 2: Set up your MCP clientNow that we've got our token, we can hop into an MCP client of your choosing.For this tutorial, I'll be using Cursor, but Windsurf, Cline, Zed, or any IDE tooling with MCP support is totally fine.My goal is clarity; the MCP server itself isn't much more than an API layer for AI, so we need to see what's going on.In Cursor, head to Cursor Settings -> MCP -> Add new global MCP server. Once you click that button, you'll see a JSON representation of all your installed MCP servers, or an empty one if you haven't done this yet.You can add the community Figma MCP server like such:{ "mcpServers": { "Framelink Figma MCP": { "command": "npx", "args":} } }To ensure Cursor can use npx, make sure you have Node installed on your system.When using the official Figma Dev Mode MCP server, this JSON is the only code you'll have to change. Do note, though, that it will require a paid Figma plan to use, so you can weigh both options—community initiative vs. standardized support.Now, when you prompt Cursor in Agent mode, you'll see the AI make tool calls to the MCP server when you say things like, "Use the Figma MCP to..."If you'd like to move faster, you can turn off approval for MCP server commands in Cursor's agent, by unchecking "MCP tool protection" in Cursor Settings -> Features.Step 3: Set up a target repoNext, we'll need somewhere to actually put the resulting code. When using this workflow, you're not always going to be starting from scratch; good design to code means implementing Figma designs in existing repos.For our purposes today, I'll just spin up a Next.js starter template, with npx create-next-app@latest.Step 4: ShowtimeOkay, we should be all set. Select the relevant layerin Figma, copy their links, and feed it into the Cursor agent. My prompt is just:Can you replace my homepage with this Figma design? It should be a basic chat interface. Figma frame: <; Please use the Figma MCP server. 
Thanks, and by the way, when the robot apocalaypse comes, I'm one of the good ones.Let's see it in action:And here's an example of some of the resulting code that it made:{/* Message Input */} <div className="p-6 bg-white border-t border-"> <div className="flex items-center space-x-4"> <button className="p-2 rounded-full hover:bg-"> <svg className="w-6 h-6 text-" fill="currentColor" viewBox="0 0 24 24"> <path d="M12 2C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2zm5 11h-4v4h-2v-4H7v-2h4V7h2v4h4v2z"/> </svg> </button> <button className="p-2 rounded-full hover:bg-"> <svg className="w-6 h-6 text-" fill="currentColor" viewBox="0 0 24 24"> <path d="M11.99 2C6.47 2 2 6.48 2 12s4.47 10 9.99 10C17.52 22 22 17.52 22 12S17.52 2 11.99 2zM12 20c-4.42 0-8-3.58-8-8s3.58-8 8-8 8 3.58 8 8-3.58 8-8 8zm3.5-9c.83 0 1.5-.67 1.5-1.5S16.33 8 15.5 8 14 8.67 14 9.5s.67 1.5 1.5 1.5zm-7 0c.83 0 1.5-.67 1.5-1.5S9.33 8 8.5 8 7 8.67 7 9.5 7.67 11 8.5 11zm3.5 6.5c2.33 0 4.31-1.46 5.11-3.5H6.89c.8 2.04 2.78 3.5 5.11 3.5z"/> </svg> </button> <div className="flex-1 relative"> <div className="flex items-center bg-rounded-full px-4 py-3"> <button className="p-1 rounded-full hover:bg-mr-3"> <svg className="w-5 h-5 text-" fill="currentColor" viewBox="0 0 24 24"> <path d="M3 18h18v-2H3v2zm0-5h18v-2H3v2zm0-7v2h18V6H3z"/> </svg> </button> <input type="text" value={newMessage} onChange={=> setNewMessage} onKeyPress={handleKeyPress} placeholder="Type a message..." className="flex-1 bg-transparent outline-none text-placeholder-" /> <button onClick={handleSendMessage} className="p-1 rounded-full hover:bg-ml-3" > <svg className="w-5 h-5 text-" fill="currentColor" viewBox="0 0 24 24"> <path d="M15.5 14h-.79l-.28-.27C15.41 12.59 16 11.11 16 9.5 16 5.91 13.09 3 9.5 3S3 5.91 3 9.5 5.91 16 9.5 16c1.61 0 3.09-.59 4.23-1.57l.27.28v.79l5 4.99L20.49 19l-4.99-5zm-6 0C7.01 14 5 11.99 5 9.5S7.01 5 9.5 5 14 7.01 14 9.5 11.99 14 9.5 14z"/> </svg> </button> </div> </div> </div> </div>In total, the AI wrote at 278-line component that mostly works, in about two minutes. Honestly, not bad for a single shot.I can use a few more prompts to clean up the code, and then go in there by hand to finesse some of the CSS, which AI never seems to get as clean as I like. But it definitely saves me time over setting this all up by hand.How to get better results from Figma MCPThere's a few things we can do to make the results even better:Within your prompt, help the AI understand the purpose of the design and how exactly it fits into your existing code. Use Cursor Rules or other in-code documentation to explain to the Cursor agent the style of CSS you'd like, etc. Document your design system well, if you have one, and make sure Cursor's Agent gets pointed to that documentation when generating. Don't overwhelm the agent. Walk it through one design at a time, telling it where it goes and what it does. The process isn't fully automatic yet.Basically, it all boils down to more context, given granularly. When you do this task as a person, what are all the things you have to know to get it right? 
Break that down, write it in markdown files, and then point the agent there every time you need to do this task.Some markdown files you might attach in all design generations are:A design system component list A CSS style guide A frameworkstyle guide Test suite rules Explicit instructions to iterate on failed lints, TypeScript checks, and testsIndividual prompts could just include what the new component should do and how it fits in the app.Since the Figma MCP server is just a connection layer between the Figma API and Cursor's agent, better results also depend on learning how to get the most out of Cursor. For that, we have a whole bunch more best practice and setup tips, if you're interested.More than anything, don't expect perfect results. Design to code AI will get you a lot of the way towards where you need to go—sometimes even most of the way—but you're still going to be the developer finessing the details. The goal is just to save a little time. You're not trying to replace yourself.Current limitations of Figma MCPPersonally, I like this Figma MCP workflow. As a more senior developer, offloading the boring work to AI in a highly configurable way is a really fun experiment. But there's still a lot of limitations.MCP is a dev-only playground. Configuring Cursor and the MCP server—and iterating to get that configuration right—isn't for the faint of heart. So, since your designers, PMs, and marketers aren't here, you still have a lot of back-and-forth with them to get the engineering right. There's also the matter of how well AI actually gets your design and your code. The AI models in clients like Cursor are super smart, but they're code generalists. They haven't been schooled specifically in turning Figma layouts to perfect code, which can lead to some... creative... interpretations. Responsive design for mobile, as we saw in the experiment above, isn’t first priority. It's not a deterministic process. Even if AI has perfect access to Figma data, it can still go off the rails. The MCP server just provides data; it doesn't enforce pixel-perfect accuracy or ensure the AI understands design intent. Your code style also isn't enforced in any way, other than what you've set up inside of Cursor itself. Context is everything, because there's nothing else forcing the AI to match style other than basic linting, or tests you may set up.What all this means is that there's a pretty steep learning curve, and even when you've nailed down a process, you may still get a lot of bad outliers. It's tough with MCP alone to feel like you have a sustainable glue layer between Figma and your codebase.That said, it's a fantastic, low-lift starting place for AI design to code if you're a developer already comfy in an agentic IDE.Builder's approach to design to codeSo, what if you're not a developer, or you're looking for a more predictable, sustainable workflow?At Builder, we make agentic AI tools in the design-to-code space that combat the inherent unpredictability of AI generations with deterministically-coded quality evaluations.Figma to code is a solved problem for us already. Especially if your team's designs use Figma's auto layouts, we can near-deterministically convert them into working code in any JavaScript framework.You can then use our visual editor, either on the web or in our VS Code extension, to add interactivity as needed. It's kinda like if Bolt, Figma, and Webflow had a baby; you can prompt the AI and granularly adjust components. Vibe code DOOM or just fix your padding. 
Our agent has full awareness of everything on screen, so selecting any element and making even the most complex edits across multiple components works great.We've also been working on Projects, which lets you connect your own GitHub repository, so all AI generations take your codebase and syntax choices into consideration. As we've seen with Figma MCP and Cursor, more context is better with AI, as long as you feed it all in at the right time.Projects syncs your design system across Figma and code, and you can make any change into a PRfor you and your team to review.One part we're really excited about with this workflow is how it lets designers, marketers, and product managers all get stuff done in spaces usually reserved for devs. As we've been dogfooding internally, we've seen boards of Jira papercut tickets just kinda... vanish.Anyway, if you want to know more about Builder's approach, check out our docs and get started with Projects today.So, is the Figma MCP worth your time?Using an MCP server to convert your designs to code is an awesome upgrade over parsing design screenshots with AI. Its data-rich approach gets you much farther along, much faster than developer effort alone.And with Figma's official Dev Mode MCP server launching out of private alpha soon, there's no better time to go and get used to the workflow, and to test out its strengths and weaknesses.Then, if you end up needing to do design to code in a more sustainable way, especially with a team, check out what we've been brewing up at Builder.Happy design engineering! #design #code #with #figma #mcp
    WWW.BUILDER.IO
    Design to Code with the Figma MCP Server
    Translating your Figma designs into code can feel exactly like the kind of frustrating, low-skill gruntwork that's perfect for AI... except that most of us have also watched AI butcher hopeful screenshots into unresponsive spaghetti.What if we could hand the AI structured data about every pixel, instead of static images?This is how Figma Model Context Protocol (MCP) servers work. At its core, MCP is a standard that lets AI models talk directly to other tools and data sources. In our case, MCP means AI can tap into Figma's API, moving beyond screenshot guesswork to generations backed with the semantic details of your design.Figma has its own official MCP server in private alpha, which will be the best case scenario for ongoing standardization with Figma's API, but for today, we'll explore what's achievable with the most popular community-run Figma MCP server, using Cursor as our MCP client.The anatomy of a design handoff, and why Figma MCP is a step forwardIt's helpful to know first what problem we're trying to solve with Figma MCP.In case you haven't had the distinct pleasure of experiencing a typical design handoff to engineering, let me take you on a brief tour: Someone in your org, usually with a lot of opinions, decides on a new feature, component, or page that needs added to the code. Your design team creates a mockup. It is beautiful and full of potential. If you're really lucky, it's even practical to implement in code. You're often not really lucky. You begin to think how to implement the design. Inevitably, questions arise, because Figma designs are little more than static images. What happens when you hover this button? Is there an animation on scroll? Is this still legible in tablet size? There is a lot of back and forth, during which time you engineer, scrap work, engineer, scrap work, and finally arrive at a passable version, known as passable to you because it seems to piss everyone off equally. Now, finally, you can do the fun part: finesse. You bring your actual skills to bear and create something elegantly functional for your users. There may be more iterations after this, but you're happy for now.Sound familiar? Hopefully, it goes better at your org.Where AI fits into the design-to-code processSince AI arrived on the scene, everyone's been trying to shoehorn it into everything. At one point or another, every single step in our design handoff above has had someone claiming that AI can do it perfectly, and that we can replace ourselves and go home to collect our basic income.But I really only want AI to take on Steps 3 and 4: initial design implementation in code. For the rest, I very much like humans in charge. This is why something like a design-to-code AI excites me. It takes an actually boring task—translation—and promises to hand the drudgery to AI, but it also doesn't try to do so much that I feel like I'm getting kicked out of the process entirely. AI scaffolds the boilerplate, and I can just edit the details.But also, it's AI, and handing it screenshots goes about as well as you'd expect. It's like if you've ever tried to draw a friend's face from memory. Sure, you can kinda tell it's them.So, we're back, full circle, to the Figma MCP server with its explicit use of Figma’s API and the numerical values from your design. Let's try it and see how much better the results may be.How to use the Figma MCP serverOkay, down to business. Feel free to follow along. 
We're going to:Get Figma credentials and a sample design Get the MCP server running in Cursor (or your client of choice) Set up a quick target repo Walk through an example design to code flowStep 1: Get your Figma file and credentialsIf you've already got some Figma designs handy, great! It's more rewarding to see your own designs come to life. Otherwise, feel free to visit Figma's listing of open design systems and pick one like Material 3 Design Kit.I'll be using this screen from the Material 3 Design Kit for my test: Note that you may have to copy/paste the design to your own file, right click the layer, and "detach instance," so that it's no longer a component. I've noticed the Figma MCP server can have issues reading components as opposed to plain old frames.Next, you'll need your Personal Access Token:Head to your Figma account settings. Go to the Security tab. Generate a new token with the permissions and expiry date you prefer.Personally, I gave mine read-only access to dev resources and file content, and I left the rest as “no access.”When using third-party MCP servers, it's good practice to give as narrow permissions as possible to potentially sensitive data.Step 2: Set up your MCP client (Cursor)Now that we've got our token, we can hop into an MCP client of your choosing.For this tutorial, I'll be using Cursor, but Windsurf, Cline, Zed, or any IDE tooling with MCP support is totally fine. (Here’s a breakdown of the differences.) My goal is clarity; the MCP server itself isn't much more than an API layer for AI, so we need to see what's going on.In Cursor, head to Cursor Settings -> MCP -> Add new global MCP server. Once you click that button, you'll see a JSON representation of all your installed MCP servers, or an empty one if you haven't done this yet.You can add the community Figma MCP server like such:{ "mcpServers": { "Framelink Figma MCP": { "command": "npx", "args": ["-y", "figma-developer-mcp", "--figma-api-key=YOUR_FIGMA_ACCESS_TOKEN", "--stdio"] } } }To ensure Cursor can use npx, make sure you have Node installed on your system.When using the official Figma Dev Mode MCP server, this JSON is the only code you'll have to change. Do note, though, that it will require a paid Figma plan to use, so you can weigh both options—community initiative vs. standardized support.Now, when you prompt Cursor in Agent mode, you'll see the AI make tool calls to the MCP server when you say things like, "Use the Figma MCP to..."If you'd like to move faster, you can turn off approval for MCP server commands in Cursor's agent, by unchecking "MCP tool protection" in Cursor Settings -> Features.Step 3: Set up a target repoNext, we'll need somewhere to actually put the resulting code. When using this workflow, you're not always going to be starting from scratch; good design to code means implementing Figma designs in existing repos.For our purposes today, I'll just spin up a Next.js starter template, with npx create-next-app@latest.Step 4: ShowtimeOkay, we should be all set. Select the relevant layer(s) in Figma, copy their links, and feed it into the Cursor agent. My prompt is just:Can you replace my homepage with this Figma design? It should be a basic chat interface. Figma frame: <https://www.figma.com/design/CPDcrzkVChAzQ3q1pC5mXd/Figma-MCP-vs.-Builder-Fusion?node-id=2-215&t=K6v805pKyoU4FqdA-4> Please use the Figma MCP server. 
Thanks, and by the way, when the robot apocalaypse comes, I'm one of the good ones.Let's see it in action:And here's an example of some of the resulting code that it made:{/* Message Input */} <div className="p-6 bg-white border-t border-[#E8DEF8]"> <div className="flex items-center space-x-4"> <button className="p-2 rounded-full hover:bg-[#F3EDF7]"> <svg className="w-6 h-6 text-[#1D1B20]" fill="currentColor" viewBox="0 0 24 24"> <path d="M12 2C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2zm5 11h-4v4h-2v-4H7v-2h4V7h2v4h4v2z"/> </svg> </button> <button className="p-2 rounded-full hover:bg-[#F3EDF7]"> <svg className="w-6 h-6 text-[#1D1B20]" fill="currentColor" viewBox="0 0 24 24"> <path d="M11.99 2C6.47 2 2 6.48 2 12s4.47 10 9.99 10C17.52 22 22 17.52 22 12S17.52 2 11.99 2zM12 20c-4.42 0-8-3.58-8-8s3.58-8 8-8 8 3.58 8 8-3.58 8-8 8zm3.5-9c.83 0 1.5-.67 1.5-1.5S16.33 8 15.5 8 14 8.67 14 9.5s.67 1.5 1.5 1.5zm-7 0c.83 0 1.5-.67 1.5-1.5S9.33 8 8.5 8 7 8.67 7 9.5 7.67 11 8.5 11zm3.5 6.5c2.33 0 4.31-1.46 5.11-3.5H6.89c.8 2.04 2.78 3.5 5.11 3.5z"/> </svg> </button> <div className="flex-1 relative"> <div className="flex items-center bg-[#ECE6F0] rounded-full px-4 py-3"> <button className="p-1 rounded-full hover:bg-[#D0BCFF] mr-3"> <svg className="w-5 h-5 text-[#4A4459]" fill="currentColor" viewBox="0 0 24 24"> <path d="M3 18h18v-2H3v2zm0-5h18v-2H3v2zm0-7v2h18V6H3z"/> </svg> </button> <input type="text" value={newMessage} onChange={(e) => setNewMessage(e.target.value)} onKeyPress={handleKeyPress} placeholder="Type a message..." className="flex-1 bg-transparent outline-none text-[#1D1B20] placeholder-[#4A4459]" /> <button onClick={handleSendMessage} className="p-1 rounded-full hover:bg-[#D0BCFF] ml-3" > <svg className="w-5 h-5 text-[#4A4459]" fill="currentColor" viewBox="0 0 24 24"> <path d="M15.5 14h-.79l-.28-.27C15.41 12.59 16 11.11 16 9.5 16 5.91 13.09 3 9.5 3S3 5.91 3 9.5 5.91 16 9.5 16c1.61 0 3.09-.59 4.23-1.57l.27.28v.79l5 4.99L20.49 19l-4.99-5zm-6 0C7.01 14 5 11.99 5 9.5S7.01 5 9.5 5 14 7.01 14 9.5 11.99 14 9.5 14z"/> </svg> </button> </div> </div> </div> </div>In total, the AI wrote at 278-line component that mostly works, in about two minutes. Honestly, not bad for a single shot.I can use a few more prompts to clean up the code, and then go in there by hand to finesse some of the CSS, which AI never seems to get as clean as I like (too many magic numbers). But it definitely saves me time over setting this all up by hand.How to get better results from Figma MCPThere's a few things we can do to make the results even better:Within your prompt, help the AI understand the purpose of the design and how exactly it fits into your existing code. Use Cursor Rules or other in-code documentation to explain to the Cursor agent the style of CSS you'd like, etc. Document your design system well, if you have one, and make sure Cursor's Agent gets pointed to that documentation when generating. Don't overwhelm the agent. Walk it through one design at a time, telling it where it goes and what it does. The process isn't fully automatic yet.Basically, it all boils down to more context, given granularly. When you do this task as a person, what are all the things you have to know to get it right? 
Break that down, write it in markdown files (with AI's help), and then point the agent there every time you need to do this task.Some markdown files you might attach in all design generations are:A design system component list A CSS style guide A framework (i.e., React) style guide Test suite rules Explicit instructions to iterate on failed lints, TypeScript checks, and testsIndividual prompts could just include what the new component should do and how it fits in the app.Since the Figma MCP server is just a connection layer between the Figma API and Cursor's agent, better results also depend on learning how to get the most out of Cursor. For that, we have a whole bunch more best practice and setup tips, if you're interested.More than anything, don't expect perfect results. Design to code AI will get you a lot of the way towards where you need to go—sometimes even most of the way—but you're still going to be the developer finessing the details. The goal is just to save a little time. You're not trying to replace yourself.Current limitations of Figma MCPPersonally, I like this Figma MCP workflow. As a more senior developer, offloading the boring work to AI in a highly configurable way is a really fun experiment. But there's still a lot of limitations.MCP is a dev-only playground. Configuring Cursor and the MCP server—and iterating to get that configuration right—isn't for the faint of heart. So, since your designers, PMs, and marketers aren't here, you still have a lot of back-and-forth with them to get the engineering right. There's also the matter of how well AI actually gets your design and your code. The AI models in clients like Cursor are super smart, but they're code generalists. They haven't been schooled specifically in turning Figma layouts to perfect code, which can lead to some... creative... interpretations. Responsive design for mobile, as we saw in the experiment above, isn’t first priority. It's not a deterministic process. Even if AI has perfect access to Figma data, it can still go off the rails. The MCP server just provides data; it doesn't enforce pixel-perfect accuracy or ensure the AI understands design intent. Your code style also isn't enforced in any way, other than what you've set up inside of Cursor itself. Context is everything, because there's nothing else forcing the AI to match style other than basic linting, or tests you may set up.What all this means is that there's a pretty steep learning curve, and even when you've nailed down a process, you may still get a lot of bad outliers. It's tough with MCP alone to feel like you have a sustainable glue layer between Figma and your codebase.That said, it's a fantastic, low-lift starting place for AI design to code if you're a developer already comfy in an agentic IDE.Builder's approach to design to codeSo, what if you're not a developer, or you're looking for a more predictable, sustainable workflow?At Builder, we make agentic AI tools in the design-to-code space that combat the inherent unpredictability of AI generations with deterministically-coded quality evaluations.Figma to code is a solved problem for us already. Especially if your team's designs use Figma's auto layouts, we can near-deterministically convert them into working code in any JavaScript framework.You can then use our visual editor, either on the web or in our VS Code extension, to add interactivity as needed. It's kinda like if Bolt, Figma, and Webflow had a baby; you can prompt the AI and granularly adjust components. 
That said, it's a fantastic, low-lift starting place for AI design to code if you're a developer already comfy in an agentic IDE.

Builder's approach to design to code

So, what if you're not a developer, or you're looking for a more predictable, sustainable workflow?

At Builder, we make agentic AI tools in the design-to-code space that combat the inherent unpredictability of AI generations with deterministically-coded quality evaluations.

Figma to code is a solved problem for us already. Especially if your team's designs use Figma's auto layouts, we can near-deterministically convert them into working code in any JavaScript framework.

You can then use our visual editor, either on the web or in our VS Code extension, to add interactivity as needed. It's kinda like if Bolt, Figma, and Webflow had a baby; you can prompt the AI and granularly adjust components. Vibe code DOOM or just fix your padding. Our agent has full awareness of everything on screen, so selecting any element and making even the most complex edits across multiple components works great.

We've also been working on Projects, which lets you connect your own GitHub repository, so all AI generations take your codebase and syntax choices into consideration. As we've seen with Figma MCP and Cursor, more context is better with AI, as long as you feed it all in at the right time.

Projects syncs your design system across Figma and code, and you can turn any change into a PR (with minimal diffs) for you and your team to review.

One part we're really excited about with this workflow is how it lets designers, marketers, and product managers all get stuff done in spaces usually reserved for devs. As we've been dogfooding internally, we've seen boards of Jira papercut tickets just kinda... vanish.

Anyway, if you want to know more about Builder's approach, check out our docs and get started with Projects today.

So, is the Figma MCP worth your time?

Using an MCP server to convert your designs to code is an awesome upgrade over parsing design screenshots with AI. Its data-rich approach gets you much farther along, much faster than developer effort alone.

And with Figma's official Dev Mode MCP server launching out of private alpha soon, there's no better time to get used to the workflow and to test out its strengths and weaknesses.

Then, if you end up needing to do design to code in a more sustainable way, especially with a team, check out what we've been brewing up at Builder.

Happy design engineering!
  • Game Dev Digest Issue #283 - Retro, Graphics Tricks, Multiplayer, and more

    Game Dev Digest Issue #283 - Retro, Graphics Tricks, Multiplayer, and more

    posted in GameDevDigest Newsletter

    Published May 23, 2025

    This article was originally published on GameDevDigest.com. Enjoy!

    "ZLinq", a Zero-Allocation LINQ Library for .NET - I've released ZLinq v1 last month! By building on structs and generics, it achieves zero allocations. It includes extensions like LINQ to Span, LINQ to SIMD, LINQ to Tree (FileSystem, JSON, GameObject, etc.), a drop-in replacement Source Generator for arbitrary types, and support for multiple platforms including .NET Standard 2.0, Unity, and Godot. (neuecc.medium.com)
    Pathfinding - I've recently been working on the pathfinding for NPCs in my game, which is something I've been looking forward to for a while now since it's a nice chunky problem to solve. I thought I'd write up this post about how I went about it all. (juhrjuhr.itch.io)
    My Work at Unity - I worked as a developer at Unity Technologies from 2009 to 2020. When I started, there were around 20 employees worldwide and Unity was still largely unknown. When I left, there were over 3000 employees and Unity had become the most widely used game engine in the industry. (runevision.com)
    Indie Game Marketing Examples: Campaigns We Loved - From Crabs to Chess: Creative marketing lessons from indie game campaigns that really worked! (impress.games)
    Making Video Games in 2025 (without an engine) - I genuinely believe making games without a big "do everything" engine can be easier, more fun, and often less overhead. I am not making a "do everything" game and I do not need 90% of the features these engines provide. I am very particular about how my games feel and look, and how I interact with my tools. (noelberry.ca)
    Welcome to Unity Design Patterns - Examples of programming design patterns in Unity C#. (Naphier)
    Palette lighting tricks on the Nintendo 64 - Below I have some notes on the directional ambient and normal mapping techniques I developed. They are both pretty simple in the end but I haven't seen them used elsewhere. (30fps.net)
    Can Itch.io Success Translate To Steam Success? - In my previous blog I looked at the stats for an itch.io game and what an over performing game looked like. Today I want to deep dive on a couple games that took their early itch.io success and parlayed it onto Steam with varied results. (howtomarketagame.com)
    Work with strings efficiently, keep the GC alive - This tip is not meant for everyone. If your code is simple, and not CPU-heavy, this tip might be overkill for your code, as it's about extremely heavy operations, where performance is crucial. (old.reddit.com)
    Indie Survival Guide - Products - The Indie Survival Guide is your ongoing archive of real talk and hard-won insights from the devs and industry experts making games happen—often against the odds. Whatever tools you're using, this growing library of Q&As, livestreams, and VODs is here to help. There's no magic formula, but we believe shared experience—across design, business, and survival—can give you the best shot. (Unity)

    Videos

    Multiplayer Systems in 10 Minutes/1 Hour/1 Day | Clocked and Loaded - Unity Developer Advocate Esteban Maldonado shows us how he scales multiplayer systems based on time constraints and how his approach differs depending on the circumstances. (Unity)
    From States to Trees: How Behavior Trees Revolutionized Game AI - In this final video of our NPC evolution series, we explore how Behavior Trees transformed game AI. Moving beyond the limitations of Finite State Machines, Behavior Trees introduced hierarchical decision-making that allowed NPCs to evaluate complex situations, prioritize actions, and respond intelligently to player choices. (Mindplay with Aaron)
    Inside Doom: The Dark Ages - Creating id Tech 8 - Interview With id Software - Want to know more about Doom: The Dark Ages and the technical make-up of the new id Tech 8? John Linneman has this extensive interview with id Software's Director of Engine Technology, Billy Khan. Every key aspect of the new technology is discussed here, along with answers to key questions like why The Dark Ages simply isn't possible without hardware accelerated ray tracing. (Digital Foundry)
    Let's Fix Unity's Animator - Let's fix the missing animation preview in Unity's Animator! (Warped Imagination)
    Why Did Older Games Feel So Much Bigger? - The evolution of game design has taken an interesting turn, where modern level design often feels more constrained despite technological advances. While retro games created vast worlds with limited resources, today's AAA games sometimes sacrifice that sense of wonder for visual fidelity. (Devin Chase)
    Mind-blowing graphical tricks in classic games - Your questions answered! | White_Pointer Gaming - It's time to answer even more viewer questions about how classic games achieved their graphical tricks! This video includes not just Mega Drive/Genesis and Super Nintendo games, but Neo Geo as well. Plus the big one that you might have been waiting for - Final Fantasy VI / Final Fantasy III! What mindblowing tricks will be unveiled this time? (White_Pointer Gaming)
    JetBrains AI Assistant Just Got a Lot More Useful (and FREE) - JetBrains AI Assistant, improved in version 2025.1 with enhanced context awareness and deeper IDE integration, brings intelligent code generation, inline prompts, and web-enhanced context directly into our workflow. Together, we'll explore how it uses these upgrades to incorporate external knowledge into its suggestions as we refactor a simple C# class into a clean and reusable programming pattern—then save that refactoring as a custom prompt for future use. (git-amend)

    Assets

    Level Up: 5K World Building Assets Bundle - Build the game of your dreams in any setting or scenario with our Level Up: 5K World Building Assets Bundle. __
    The Supreme Unreal & Unity Game Dev Bundle - Dive into an asset collection that offers the widest range of stylized towns, buildings, and more with The Supreme Unreal & Unity Game Dev Bundle! Save time and money by accessing this library of 50+ asset sets, ranging from medieval Viking villages to deserted military outposts—specific standouts include Whispering Grove Environment and Asian Dynasty Environment. Get the assets you need to help bring your game to life, and help support the charity of your choice with your purchase! (Humble Bundle Affiliate)
    PoiyomiToonShader - A feature-rich toon shader for Unity and VRChat. (poiyomi, Open Source)
    APFrameworkUI - A Text Mesh Pro based text-only UI system for Unity. (dklassic, Open Source)
    UnityProcgen - Library of procedural generation code for use in Unity. (coryleach, Open Source)
    GeneLit - GeneLit is an alternative to the standard shader for the Unity built-in pipeline. (momoma-null, Open Source)
    barelymusician - A real-time music engine. (anokta, Open Source)
    ColliderMeshTool - Generate custom mesh colliders in Unity with hulls or hand-drawn outlines. (SinlessDevil, Open Source)
    Graphlit - Custom node shader editor for Unity. (z3y, Open Source)
    Easy Peasy First Person Controller (FREE) - Easy Peasy First Person Controller is a user-friendly, ready-to-use first-person controller for Unity. It provides a wide range of customizable features for seamless integration into your game. (assetstore.unity.com Affiliate)
    UnityInGameConsole - A powerful Command Line Processor and log viewer for Unity. It can be run in the editor or in a built-out player for any platform, allowing you to see your log and callstacks in your final product, without having to search for Unity log files. (ArtOfSettling, Open Source)
    Descant - An enhanced and user-friendly Unity dialogue system plugin. (Owmacohe, Open Source)
    position-visualizer - Unity editor tool to visualize positions in the scene. (mminer, Open Source)
    Lua-CSharp - High performance Lua interpreter implemented in C# for .NET and Unity. (nuskey8, Open Source)
    UnityNativeFilePicker - A native Unity plugin to import/export files from/to various document providers on Android & iOS. (yasirkula, Open Source)
    usyrup - A runtime dependency injection framework for the Unity Game Engine! (Jeffan207, Open Source)
    UnityIngameDebugConsole - A uGUI based console to see debug messages and execute commands during gameplay in Unity. (yasirkula, Open Source)
    ContentManagementSystem - CMS based on XK's realization for Unity. (megurte, Open Source)
    IsoMesh - IsoMesh is a group of related tools for Unity for converting meshes into signed distance field data, raymarching signed distance fields, and extracting signed distance field data back to meshes via surface nets or dual contouring. (EmmetOT, Open Source)
    Navigathena - Scene management framework for Unity. Provides a new generation of scene management. (mackysoft, Open Source)
    UI Cursors - UI for Mouse Cursors is a great start for the development journey you are looking for; whether for testing or a finished game, it offers a great variety. Animatable, to make the game look alive! (verzatiledev.itch.io)
    25% Off Unity Asset Store - Get 25% off your next purchase—even on discounted assets! Use code TWXJ982ND at checkout and keep building something amazing. Limited to 5 redemptions. (Unity Affiliate)
    Shop up to 50% off Kyeoms - Publisher Sale - I'm an individual game VFX artist. I'm interested in Cartoon style and Stylized VFX. PLUS, get New Stylized Explosion Package for FREE with code KYEOMS2025. (Unity Affiliate)
    Ultimate World Building Asset Bundle - Imagine it—build it—love it. Elevate your next project with elite 3D assets, textures, references, and more from the Ultimate World Building Asset Bundle by ScansMatter—featuring 300 free commercial credits on ScansMatter.com, Rooftop Asset Kit, Office Environment Kit, and much more. This limited-time partnership with ScansMatter gives Humble Bundle members a unique opportunity to access countless professional-quality assets at a fraction of the price. Get the assets you need to bring your next visual project to life—and help support the World Wildlife Fund with your purchase! (Humble Bundle Affiliate)
    Unlock Pro 3D Modeling Skills With Blender - Software Bundle - Unlock awesome 3D tools for Blender. __

    Spotlight

    Brine - An upcoming boomer shooter from Studio Whalefall, a 3rd year university team from Falmouth's Games Academy. Slippery fishy enemies are attacking your quaint Cornish town, and it's up to one disgruntled fisherman to save the day. Fight your way through waves of local seafood and paint the town with red. [Get the demo on Itch.io] (Studio Whalefall)
    My game, Call Of Dookie. Demo available on Steam.

    You can subscribe to the free weekly newsletter on GameDevDigest.com

    This post includes affiliate links; I may receive compensation if you purchase products or services from the different links provided in this article.

  • AI Pace Layers: a framework for resilient product design

    Designing human-centered AI products can be arduous. Keeping up with the overall pace of change isn't easy. But here's a bigger challenge: the wildly different paces of change attached to the key elements of AI product strategy, design, and development can make managing those elements — and even thinking about them — overwhelming.

    Yesterday's design processes and frameworks offer priceless guidance that still holds. But in many spots, they just don't fit today's environment. For instance, designers used to map out and user-test precise, predictable end-to-end screen flows. But flows are no longer precisely predictable. AI generates dynamic dialogues and custom-tailored flows on the fly, rendering much of the old practice unhelpful and infeasible.

    It's easy for product teams to feel adrift nowadays — we can hoist the sails, but we're missing a map and a rudder. We need frameworks tailored to the traits that fundamentally set AI apart from traditional software, including:

    • its capabilities for autonomy and collaboration,
    • its probabilistic nature,
    • its early need for quality data, and
    • its predictable unpredictability. Humans tend to be perpetually surprised by its abilities — and its inabilities.

    AI pace layers: design for resilience

    Here's a framework to address these challenges. Building on Stewart Brand's "Shearing Layers" framework, AI Pace Layers helps teams grow thriving AI products by framing them as layered systems with components that function and evolve at different timescales. It helps anticipate points of friction and create resilient and humane products.

    Each layer represents a specific domain of activity and responsibility, with a distinct pace of change.* Unlike the other layers, Services cuts across multiple layers rather than sitting between them, and its pace of change fluctuates erratically.

    Boundaries between layers call for special attention and care — friction at these points can produce destructive shearing and constructive turbulence.

    I'll dive deeper into this framework with some practical examples showing how it works. But first, a brief review of the precursors that inspired this framework will help you put it to good use.

    The foundations

    This model builds on the insights of several influential design frameworks from the professions of building architecture and traditional software design.

    Shearing layers

    In his 1994 book How Buildings Learn, Stewart Brand expanded on architect Frank Duffy's concept of shearing layers. The core insight: buildings consist of components that change at different rates (Duffy's layers: Shell, Services, Scenery, and Sets).

    "…there isn't any such thing as a building. A building properly conceived is several layers of longevity of built components." — Frank Duffy

    Shearing Layers of Change, from How Buildings Learn: What Happens After They're Built.

    Expanding on Duffy's work, Brand identified six layers, from the slow-changing "Site" to the rapidly evolving "Stuff." As the layers move at different speeds, friction forms where they meet. Buildings designed without mindful consideration of these different velocities tear themselves apart at these "shearing" points. Before long, they tend to be demolished and replaced.

    Buildings designed for resiliency allow for "slippage" between the moving layers — flexibility for the different rates of change to unfold with minimal conflict. Such buildings can thrive and remain useful for hundreds of years.

    Pace layers

    In 1999, Brand drew insights from ecologists to expand this concept beyond buildings and encompass human society.
    In The Clock Of The Long Now: Time And Responsibility, he proposed "Pace Layers" — six levels ranging from rapid fashion to glacially-slow nature.

    Brand's Pace Layers, as sketched by Jono Hey.

    Brand again pointed out the boundaries, where the most intriguing and consequential changes emerge. Friction at the tension points can tear a building apart — or spur a civilization's collapse — when we try to bind the layers too tightly together. But with mindful design and planning for slippage, activity along these boundary zones can also generate "constructive turbulence" that keeps systems balanced and resilient.

    The most successful systems survive and thrive through times of change by being resilient: absorbing and incorporating shocks.

    "…a few scientists have been probing the same issue in ecological systems: how do they manage change, how do they absorb and incorporate shocks? The answer appears to lie in the relationship between components in a system that have different change-rates and different scales of size. Instead of breaking under stress like something brittle, these systems yield as if they were soft. Some parts respond quickly to the shock, allowing slower parts to ignore the shock and maintain their steady duties of system continuity." — Stewart Brand

    Roles and tendencies of the fast and slow layers: slower layers provide constraints and underpinnings for the faster layers, while faster layers induce adaptations in the slower layers that evolve the system.

    Elements of UX

    Jesse James Garrett's classic The Elements of User Experience presents a five-layer model for digital design: Surface, Skeleton, Structure, Scope, and Strategy. Each layer answers a different set of questions, with the questions answered at each level setting constraints for the levels above. Lower layers set boundaries and underpinnings that help define the more concrete layers.

    Jesse James Garrett's five layers, from The Elements of User Experience.

    This framework doesn't focus on time, or on tension points resulting from conflicting velocities. But it provides a comprehensive structure for shaping different aspects of digital product design, from abstract strategy to concrete surface elements.

    AI Pace Layers: diving deeper

    Building on these foundations, the AI Pace Layers framework adapts these concepts specifically for AI systems design. Let's explore each layer and understand how design expertise contributes across the framework.

    Sessions

    Pace of change: Very fast
    Focus: Performance of real-time interactions.

    This layer encompasses real-time dialogue, reasoning, and processing. These interplays happen between the user and AI, and between AI agents and other services and people, on behalf of the user. Sessions draw on lower-layer capabilities and components to deliver the "moments of truth" where product experiences succeed or fail. Feedback from the Sessions layer is crucial for improving and evolving the lower layers.

    Key contributors: Users and AI agents — usually with zero direct human involvement backstage.
    Example actions/decisions/artifacts: User/AI dialogue. Audio, video, text, images, and widgets are rendered on the fly. Real-time adaptations to context.

    Skin

    Pace of change: Moderately fast
    Focus: Design patterns, guidelines, and assets

    Skin encompasses visual, interaction, and content design.

    Key contributors: Designers, content strategists, front-end developers, and user researchers.
    Design's role: This is where designers' traditional expertise shines.
    They craft the interface elements, establish visual language, define interaction patterns, and create the design systems that represent the product's capabilities to users.
    Example actions/decisions/artifacts: UI component libraries, brand guidelines, prompt templates, tone of voice guidelines, navigation systems, visual design systems, patterns, content style guides.

    Services

    Pace of change: Wildly variable
    Focus: AI computation capabilities, data systems orchestration, and operational intelligence

    The Services layer provides probabilistic AI capabilities that sometimes feel like superpowers — and like superpowers, they can be difficult to manage. It encompasses foundation models, algorithms, data pipelines, evaluation frameworks, business logic, and computing resources.

    Services is an outlier that behaves differently from the other layers:

    • It's more prone to "shocks" and surprises that can ripple across the rest of the system.
    • It varies wildly in pace of change.
    • It cuts across multiple layers rather than sitting between two of them. That produces more cross-layer boundaries, more tension points, more risks of destructive friction, and more opportunities for constructive turbulence.

    Key contributors: Data scientists, engineers, service designers, ethicists, product teams
    Design's role: Designers partner with technical teams on evaluation frameworks, helping define what "good" looks like from a human experience perspective. They contribute to guardrails, monitoring systems, and multi-agent collaboration patterns, ensuring technical capabilities translate to meaningful human experiences. Service design expertise helps orchestrate complex, multi-touchpoint AI capabilities.
    Example actions/decisions/artifacts: Foundation model selection, changes, and fine-tuning. Evals, monitoring systems, guardrails, performance metrics. Business rules, workflow orchestration. Multiagent collaboration and use of external tools. Continual appraisal and adoption of new tools, protocols, and capabilities.

    Skeleton

    Pace of change: Moderately slow
    Focus: Fundamental structure and organization

    This layer establishes the foundational architecture — the core interaction models, information architecture, and organizing principles.

    Key contributors: Information architects, information designers, user researchers, system architects, engineers
    Design's role: Designers with information architecture expertise are important in this layer. They design taxonomies, knowledge graphs, and classification systems that make complex AI capabilities comprehensible and usable. UX researchers help ensure these structures fit the audience's mental models, contexts, and expectations.
    Example actions/decisions/artifacts: Taxonomies, knowledge graphs, data models, system architecture, classification systems.

    Scope

    Pace of change: Slow
    Focus: Product requirements

    This layer defines core functional, content, and data requirements, accounting for the probabilistic nature of AI and defining acceptable levels of performance and variance.

    Key contributors: Product managers, design strategists, design researchers, business stakeholders, data scientists, trust & safety specialists
    Design's role: Design researchers and strategists contribute to requirements through generative and exploratory research. They help define error taxonomies and acceptable failure modes from a user perspective, informing metrics that capture technical performance and human experience quality.
    Design strategists balance technical possibilities with human needs and ethical considerations.
    Example actions/decisions/artifacts: Product requirements documents specifying reliability thresholds, data requirements, error taxonomies and acceptable failure modes, performance metrics frameworks, responsible AI requirements, risk assessment, core user stories and journeys, documentation of expected model variance and handling approaches.

    Strategy

    Pace of change: Very slow
    Focus: Long-term vision and business goals

    This foundation layer defines audience needs, core problems to solve, and business goals. In AI products, data strategy is central.

    Key contributors: Executive leadership, design leaders, product leadership, business strategists, ethics boards
    Design's role: Design leaders define problem spaces, identify opportunities, and plan roadmaps. They deliver a balance of business needs with human values in strategy development. Designers with expertise in responsible AI help establish ethical frameworks and guiding principles that shape all other layers.
    Example actions/decisions/artifacts: Problem space and opportunity assessments, market positioning documents, long-term product roadmaps, comprehensive data strategy planning, user research findings on core needs, ethical frameworks and guiding principles, business model documentation, competitive/cooperative AI ecosystem mapping.

    Practical examples: tension points between layers

    Tension point example 1: Bookmuse's timeline troubles

    Bookmuse is a promising new AI tool for novelists. Samantha, a writer, tries it out while hashing out the underpinnings of her latest time-travel historical fiction thriller. The Bookmuse team planned for plenty of Samantha's needs. At first, she considers Bookmuse a handy assistant. It supplements chats with tailored interactive visualizations that efficiently track character personalities, histories, relationships, and dramatic arcs.

    But Samantha is writing a story about time travelers interfering with World War I events, so she's constantly juggling dates and timelines. Bookmuse falls short. It's a tiny startup, and Luke, the harried cofounder who serves as a combination designer/researcher/product manager, hasn't carved out any date-specific timeline tools or date calculators. He forgot to provide even a basic date picker in the design system.

    Problem: Bookmuse does its best to help Samantha with her story timeline. But it lacks effective tools for the job. Its date and time interactions feel confusing, clumsy, and out of step with the rest of its tone, look, and feel. Whenever Samantha consults the timeline, it breaks her out of her creative flow.

    Constructive turbulence opportunities:

    a) Present feedback mechanisms that ensure this sort of "missing piece" event results in the product team learning about the type of interaction pothole that appeared — without revealing details or content that compromise Samantha's privacy and her work.
    b) Improve timeline/date UI and interaction patterns. Table stakes: standard industry-best-practice date picker components that suit Bookmuse's style, tone, and voice. Game changers: widgets, visualizations, and patterns tailored to the special time-tracking/exploration challenges that fiction writers often wrestle with.
    c) Update the core usability heuristics and universal interaction design patterns baked into the evaluation frameworks, as part of regular eval reviews and updates.
    Result: When the team learns about a friction moment like this, they can prevent a host of future similar issues before they emerge. These improvements will make Bookmuse more resilient and useful.

    Tension point example 2: MedicalMind's diagnostic dilemma

    Thousands of healthcare providers use MedicalMind, an AI-powered clinical decision support tool. Dr. Rina Patel, an internal medicine physician at a busy community hospital, relies on it to stay current with rapidly evolving medical research while managing her patient load.

    Thanks to a groundbreaking update, a MedicalMind AI model is familiar with new medical research data and can recognize newly discovered connections between previously unrelated symptoms across different medical specialties. For example, it identified patterns linking certain dermatological symptoms to early indicators of cardiovascular issues — connections not yet widely recognized in standard medical taxonomies.

    But MedicalMind's information architecture was tailored to traditional medical classification systems, so it's organized by body system, conditions by specialty, and treatments by mechanism of action. The MedicalMind team constructed this structure based on how doctors were traditionally trained to approach medical knowledge.

    Problem: When Dr. Patel enters a patient's constellation of symptoms, MedicalMind's AI can recognize potentially valuable cross-specialty patterns. But these insights can't be optimally organized and presented because the underlying information architecture doesn't easily accommodate the new findings and relationships. The AI either forces the insights into ill-fitting categories or presents them as disconnected "additional notes" that tend to be overlooked. That reduces their clinical utility and Dr. Patel's trust in the system.

    Constructive turbulence opportunities:

    a) Create an "emerging patterns" framework within the information architecture that can accommodate new AI-identified patterns in ways that augment, rather than disrupt, the familiar classification systems that doctors rely on.
    b) Design flexible visualization components and interaction patterns and styles specifically for exploring, discussing, and documenting cross-category relationships. Let doctors toggle between traditional taxonomies and newer, AI-generated knowledge maps depending on their needs and comfort level.
    c) Implement a clinician feedback loop where specialists can validate and discuss new AI-surfaced relationships, gradually promoting validated patterns into the main classification system.

    These improvements will make MedicalMind more adaptive to emerging medical knowledge while maintaining the structural integrity that healthcare professionals rely on for critical decisions. This provides more efficient assistants for clinicians and better health for patients.

    Tension point example 3: ScienceSeeker's hypothesis bottleneck

    ScienceSeeker is an AI research assistant used by scientists worldwide. Dr. Elena Rodriguez, a molecular biologist, uses it to investigate protein interactions for targeted cancer drug delivery.

    The AI engine recently gained the ability to generate sophisticated hypothesis trees with multiple competing explanations, track confidence levels for each branch, and identify which experiments would most efficiently disambiguate between theories.
    It can reason across scientific domains, connecting molecular biology with physics, chemistry, and computational modeling. But the interface remains locked in a traditional chatbot paradigm — a single-threaded exchange with responses appearing sequentially in a scrolling window.

    Problem: The AI engine and the problem space are natively multithreaded and multimodal, but the UI is limited to single-threaded conversation. When Dr. Rodriguez inputs her experimental results, the AI generates a rich, multidimensional analysis, but must flatten this complex reasoning into linear text. Critical relationships between hypotheses become buried in paragraphs, probability comparisons are difficult, and the holistic picture of how variables influence multiple hypotheses is lost. Dr. Rodriguez resorts to taking screenshots and manually drawing diagrams to reconstruct the reasoning that the AI possesses but cannot visually express.

    Constructive turbulence opportunities:

    a) Develop an expandable, interactive, infinite-canvas "hypothesis tree" visualization that helps the AI dynamically represent multiple competing explanations and their relationships. Scientists can interact with this to explore different branches spatially rather than sequentially.
    b) Create a dual-pane interface that maintains the chat for simple queries but provides the infinite canvas for complex reasoning, transitioning seamlessly based on response complexity.
    c) Implement collaborative, interactive node-based diagrams for multi-contributor experiment planning, where potential experiments appear as nodes showing how they would affect confidence in different hypothesis branches.

    This would transform ScienceSeeker's limited text assistant into a scientific reasoning partner. It would help researchers visualize and interact with complex possibilities in ways that better fit how they tackle multidimensional problems.

    Navigating the future with AI Pace Layers

    AI Pace Layers offers product teams a new framework for seeing and shaping the bewildering structures and dynamics that power AI products. By recognizing the evolving layers and heeding and designing for their boundaries, AI design teams can:

    • Transform tension points into constructive innovation
    • Anticipate friction before it damages the product experience
    • Grow resilient and humane AI systems that absorb and integrate rapid technological change without losing sight of human needs.

    The framework's value isn't in rigid categorization, but in recognizing how components interact across timescales. For AI product teams, this awareness enables more thoughtful design choices that prevent destructive shearing that can tear apart an AI system.

    This framework is a work in progress, evolving alongside the AI landscape it describes. I'd love to hear from you, especially if you've built successful AI products and have insights on how this model could better reflect your experience. Please drop me a line or add a comment. Let's develop more effective approaches to creating AI systems that enhance human potential while respecting human agency.

    Part of the Mindful AI Design series. Also see:
    The effort paradox in AI design: Why making things too easy can backfire
    Black Mirror: "Override". Dystopian storytelling for humane AI design

    Stay updated

    Subscribe to be notified when new articles in the series are published.
    Join our community of designers, product managers, founders, and ethicists as we shape the future of mindful AI design.

    AI Pace Layers: a framework for resilient product design was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
    #pace #layers #framework #resilient #product
    AI Pace Layers: a framework for resilient product design
    Designing human-centered AI products can be arduous. Keeping up with the overall pace of change isn't easy. But here's a bigger challenge: the wildly different paces of change attached to the key elements of AI product strategy, design, and development can make managing those elements — and even thinking about them — overwhelming.
    Yesterday's design processes and frameworks offer priceless guidance that still holds. But in many spots, they just don't fit today's environment. For instance, designers used to map out and user-test precise, predictable end-to-end screen flows. But flows are no longer precisely predictable. AI generates dynamic dialogues and custom-tailored flows on the fly, rendering much of the old practice unhelpful and infeasible.
    It's easy for product teams to feel adrift nowadays — we can hoist the sails, but we're missing a map and a rudder. We need frameworks tailored to the traits that fundamentally set AI apart from traditional software, including:
    • its capabilities for autonomy and collaboration,
    • its probabilistic nature,
    • its early need for quality data, and
    • its predictable unpredictability. Humans tend to be perpetually surprised by its abilities — and its inabilities.
    AI pace layers: design for resilience
    Here's a framework to address these challenges. Building on Stewart Brand's "Shearing Layers" framework, AI Pace Layers helps teams grow thriving AI products by framing them as layered systems with components that function and evolve at different timescales. It helps anticipate points of friction and create resilient and humane products.
    Each layer represents a specific domain of activity and responsibility, with a distinct pace of change.* Unlike the other layers, Services cuts across multiple layers rather than sitting between them, and its pace of change fluctuates erratically.
    Boundaries between layers call for special attention and care — friction at these points can produce destructive shearing and constructive turbulence.
    I'll dive deeper into this framework with some practical examples showing how it works. But first, a brief review of the precursors that inspired this framework will help you put it to good use.
    The foundations
    This model builds on the insights of several influential design frameworks from the professions of building architecture and traditional software design.
    Shearing layers (Duffy and Brand)
    In his 1994 book How Buildings Learn, Stewart Brand expanded on architect Frank Duffy's concept of shearing layers. The core insight: buildings consist of components that change at different rates.
    Image: Shell, Services, Scenery, and Sets (Frank Duffy, 1992).
    "…there isn't any such thing as a building. A building properly conceived is several layers of longevity of built components." — Frank Duffy
    Image: Shearing Layers of Change, from How Buildings Learn: What Happens After They're Built (Stewart Brand, 1994).
    Expanding on Duffy's work, Brand identified six layers, from the slow-changing "Site" to the rapidly evolving "Stuff." As the layers move at different speeds, friction forms where they meet. Buildings designed without mindful consideration of these different velocities tear themselves apart at these "shearing" points. Before long, they tend to be demolished and replaced.
    Buildings designed for resiliency allow for "slippage" between the moving layers — flexibility for the different rates of change to unfold with minimal conflict. Such buildings can thrive and remain useful for hundreds of years.
    Pace layers (Brand)
    In 1999, Brand drew insights from ecologists to expand this concept beyond buildings and encompass human society. In The Clock of the Long Now: Time and Responsibility, he proposed "Pace Layers" — six levels ranging from rapid fashion to glacially slow nature.
    Image: Brand's Pace Layers (1999), as sketched by Jono Hey.
    Brand again pointed out the boundaries, where the most intriguing and consequential changes emerge. Friction at the tension points can tear a building apart — or spur a civilization's collapse — when we try to bind the layers too tightly together. But with mindful design and planning for slippage, activity along these boundary zones can also generate "constructive turbulence" that keeps systems balanced and resilient. The most successful systems survive and thrive through times of change because they are resilient: they absorb and incorporate shocks.
    "…a few scientists (such as R. V. O'Neill and C. S. Holling) have been probing the same issue in ecological systems: how do they manage change, how do they absorb and incorporate shocks? The answer appears to lie in the relationship between components in a system that have different change-rates and different scales of size. Instead of breaking under stress like something brittle, these systems yield as if they were soft. Some parts respond quickly to the shock, allowing slower parts to ignore the shock and maintain their steady duties of system continuity." — Stewart Brand
    Image: Roles and tendencies of the fast (upper) and slow (lower) layers (Brand).
    Slower layers provide constraints and underpinnings for the faster layers, while faster layers induce adaptations in the slower layers that evolve the system.
    Elements of UX (Garrett)
    Jesse James Garrett's classic The Elements of User Experience (2002) presents a five-layer model for digital design:
    • Surface (visual design)
    • Skeleton (interface design, navigation design, information design)
    • Structure (interaction design, information architecture)
    • Scope (functional specs, content requirements)
    • Strategy (user needs, site objectives)
    Each layer answers a different set of questions, with the questions answered at each level setting constraints for the levels above. Lower layers set boundaries and underpinnings that help define the more concrete layers.
    Image: Jesse James Garrett's five layers, from The Elements of User Experience (2002).
    This framework doesn't focus on time, or on tension points resulting from conflicting velocities. But it provides a comprehensive structure for shaping different aspects of digital product design, from abstract strategy to concrete surface elements.
    AI Pace Layers: diving deeper
    Building on these foundations, the AI Pace Layers framework adapts these concepts specifically for AI systems design. Let's explore each layer and understand how design expertise contributes across the framework.
    Sessions
    Pace of change: Very fast (milliseconds to minutes)
    Focus: Performance of real-time interactions
    This layer encompasses real-time dialogue, reasoning, and processing. These interplays happen between the user and AI, and between AI agents and other services and people, on behalf of the user. Sessions draw on lower-layer capabilities and components to deliver the "moments of truth" where product experiences succeed or fail. Feedback from the Sessions layer is crucial for improving and evolving the lower layers.
    Key contributors: Users and AI agents — usually with zero direct human involvement backstage.
    Example actions/decisions/artifacts: User/AI dialogue. Audio, video, text, images, and widgets are rendered on the fly (using building blocks provided by lower levels). Real-time adaptations to context.
    Skin
    Pace of change: Moderately fast (days to months)
    Focus: Design patterns, guidelines, and assets
    Skin encompasses visual, interaction, and content design.
    Key contributors: Designers, content strategists, front-end developers, and user researchers.
    Design's role: This is where designers' traditional expertise shines. They craft the interface elements, establish visual language, define interaction patterns, and create the design systems that represent the product's capabilities to users.
    Example actions/decisions/artifacts: UI component libraries, brand guidelines, prompt templates, tone of voice guidelines, navigation systems, visual design systems, patterns (UI, interaction, and conversation), content style guides.
    Services
    Pace of change: Wildly variable (slow to moderately fast)
    Focus: AI computation capabilities, data systems orchestration, and operational intelligence
    The Services layer provides probabilistic AI capabilities that sometimes feel like superpowers — and like superpowers, they can be difficult to manage. It encompasses foundation models, algorithms, data pipelines, evaluation frameworks, business logic, and computing resources.
    Services is an outlier that behaves differently from the other layers:
    • It's more prone to "shocks" and surprises that can ripple across the rest of the system.
    • It varies wildly in pace of change. (But its components rarely change faster than Skin, or slower than Skeleton.)
    • It cuts across multiple layers rather than sitting between two of them. That produces more cross-layer boundaries, more tension points, more risks of destructive friction, and more opportunities for constructive turbulence.
    Key contributors: Data scientists, engineers, service designers, ethicists, product teams
    Design's role: Designers partner with technical teams on evaluation frameworks, helping define what "good" looks like from a human experience perspective. They contribute to guardrails, monitoring systems, and multi-agent collaboration patterns, ensuring technical capabilities translate to meaningful human experiences. Service design expertise helps orchestrate complex, multi-touchpoint AI capabilities.
    Example actions/decisions/artifacts: Foundation model selection, changes, and fine-tuning. Evals, monitoring systems, guardrails, performance metrics. Business rules, workflow orchestration. Multi-agent collaboration and use of external tools (APIs, A2A, MCP, etc.). Continual appraisal and adoption of new tools, protocols, and capabilities.
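    To make the Services-layer notions of evals and guardrails slightly more concrete, here is a minimal sketch of how a team might encode a few experience-quality checks that run against model outputs. It is purely illustrative and not from the original article: the case names, thresholds, and checks are assumptions, and a production evaluation framework would be far more extensive.
```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EvalCase:
    """One experience-quality check applied to a model response (illustrative only)."""
    name: str
    check: Callable[[str], bool]   # returns True when the response passes
    layer: str = "Services"        # which AI Pace Layer owns this check

def run_evals(response: str, cases: List[EvalCase]) -> dict:
    """Run every check against a single response and report pass/fail per case."""
    results = {case.name: case.check(response) for case in cases}
    results["pass_rate"] = sum(results.values()) / len(cases)
    return results

# Hypothetical guardrail-style checks a design/eval team might agree on.
CASES = [
    EvalCase("stays_under_length_budget", lambda r: len(r.split()) <= 150),
    EvalCase("avoids_overclaiming", lambda r: "guaranteed" not in r.lower()),
    EvalCase("offers_next_step", lambda r: "?" in r or "you can" in r.lower()),
]

if __name__ == "__main__":
    sample = "Here is a draft timeline. You can adjust any date before exporting."
    print(run_evals(sample, CASES))
```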
    Skeleton
    Pace of change: Moderately slow (months)
    Focus: Fundamental structure and organization
    This layer establishes the foundational architecture — the core interaction models, information architecture, and organizing principles.
    Key contributors: Information architects, information designers, user researchers, system architects, engineers
    Design's role: Designers with information architecture expertise are important in this layer. They design taxonomies, knowledge graphs, and classification systems that make complex AI capabilities comprehensible and usable. UX researchers help ensure these structures fit the audience's mental models, contexts, and expectations.
    Example actions/decisions/artifacts: Taxonomies, knowledge graphs, data models, system architecture, classification systems.
    Scope
    Pace of change: Slow (months to years)
    Focus: Product requirements
    This layer defines core functional, content, and data requirements, accounting for the probabilistic nature of AI and defining acceptable levels of performance and variance.
    Key contributors: Product managers, design strategists, design researchers, business stakeholders, data scientists, trust & safety specialists
    Design's role: Design researchers and strategists contribute to requirements through generative and exploratory research. They help define error taxonomies and acceptable failure modes from a user perspective, informing metrics that capture technical performance and human experience quality. Design strategists balance technical possibilities with human needs and ethical considerations.
    Example actions/decisions/artifacts: Product requirements documents specifying reliability thresholds, data requirements (volume, diversity, quality standards), error taxonomies and acceptable failure modes, performance metrics frameworks, responsible AI requirements, risk assessment, core user stories and journeys, documentation of expected model variance and handling approaches.
    Strategy
    Pace of change: Very slow (years)
    Focus: Long-term vision and business goals
    This foundation layer defines audience needs, core problems to solve, and business goals. In AI products, data strategy is central.
    Key contributors: Executive leadership, design leaders, product leadership, business strategists, ethics boards
    Design's role: Design leaders define problem spaces, identify opportunities, and plan roadmaps. They deliver a balance of business needs with human values in strategy development. Designers with expertise in responsible AI help establish ethical frameworks and guiding principles that shape all other layers.
    Example actions/decisions/artifacts: Problem space and opportunity assessments, market positioning documents, long-term product roadmaps, comprehensive data strategy planning, user research findings on core needs, ethical frameworks and guiding principles, business model documentation, competitive/cooperative AI ecosystem mapping.
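    The layer summaries above can also be treated as a lightweight working checklist. The sketch below is a hypothetical illustration, not part of the framework as published, of how a team might encode the six layers and their paces of change so that decisions and artifacts can be tagged by layer; the field names and helper are assumptions paraphrased from the descriptions above.
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PaceLayer:
    """One AI Pace Layer, with values paraphrased from the article's summaries."""
    name: str
    pace: str
    focus: str
    cuts_across: bool = False  # only Services spans multiple layers

AI_PACE_LAYERS = [
    PaceLayer("Sessions", "very fast (milliseconds to minutes)", "real-time interactions"),
    PaceLayer("Skin", "moderately fast (days to months)", "design patterns, guidelines, assets"),
    PaceLayer("Services", "wildly variable", "AI capabilities, data, operational intelligence", cuts_across=True),
    PaceLayer("Skeleton", "moderately slow (months)", "structure and organization"),
    PaceLayer("Scope", "slow (months to years)", "product requirements"),
    PaceLayer("Strategy", "very slow (years)", "long-term vision and business goals"),
]

@dataclass
class Decision:
    """A product decision or artifact tagged with the layer it belongs to."""
    summary: str
    layer: str

    def expected_pace(self) -> str:
        # Look up the pace of change a decision at this layer should normally follow.
        match = next(l for l in AI_PACE_LAYERS if l.name == self.layer)
        return match.pace

if __name__ == "__main__":
    d = Decision("Swap the default date picker component", "Skin")
    print(d.summary, "->", d.expected_pace())
```
    Tagging work items this way can make mismatches visible, for example a Skin-level change that is being reviewed on a Strategy-level cadence.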
    Practical examples: tension points between layers
    Tension point example 1: Bookmuse's timeline troubles (Friction between Sessions and Skin)
    Bookmuse is a promising new AI tool for novelists. Samantha, a writer, tries it out while hashing out the underpinnings of her latest time-travel historical fiction thriller. The Bookmuse team planned for plenty of Samantha's needs. At first, she considers Bookmuse a handy assistant. It supplements chats with tailored interactive visualizations that efficiently track character personalities, histories, relationships, and dramatic arcs.
    But Samantha is writing a story about time travelers interfering with World War I events, so she's constantly juggling dates and timelines. Bookmuse falls short. It's a tiny startup, and Luke, the harried cofounder who serves as a combination designer/researcher/product manager, hasn't carved out any date-specific timeline tools or date calculators. He forgot to provide even a basic date picker in the design system.
    Problem: Bookmuse does its best to help Samantha with her story timeline (Sessions layer). But it lacks effective tools for the job (Skin layer). Its date and time interactions feel confusing, clumsy, and out of step with the rest of its tone, look, and feel. Whenever Samantha consults the timeline, it breaks her out of her creative flow.
    Constructive turbulence opportunities:
    a) Present feedback mechanisms that ensure this sort of "missing piece" event results in the product team learning about the type of interaction pothole that appeared — without revealing details or content that compromise Samantha's privacy and her work. (For instance, a session tagging system can flag all interaction dead-ends during date choice interactions; see the sketch after this example.)
    b) Improve timeline/date UI and interaction patterns. Table stakes: Standard industry-best-practice date picker components that suit Bookmuse's style, tone, and voice. Game changers: Widgets, visualizations, and patterns tailored to the special time-tracking/exploration challenges that fiction writers often wrestle with.
    c) Update the core usability heuristics and universal interaction design patterns baked into the evaluation frameworks (in the Services layer), as part of regular eval reviews and updates.
    Result: When the team learns about a friction moment like this, they can prevent a host of future similar issues before they emerge. These improvements will make Bookmuse more resilient and useful.
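    Opportunity (a) above depends on reporting the shape of a friction moment without reporting its content. The following is a minimal, hypothetical sketch of that idea; the event categories and fields are assumptions, and a real system would still need explicit consent, retention, and anonymization policies.
```python
import json
import time
from dataclasses import dataclass, asdict

# Only coarse categories are recorded; never the user's words or story content.
FRICTION_CATEGORIES = {"date_entry_dead_end", "missing_widget", "tone_mismatch"}

@dataclass
class FrictionEvent:
    category: str          # one of FRICTION_CATEGORIES
    layer_hint: str        # e.g. "Skin" if a UI component was missing
    session_bucket: str    # coarse time bucket, not a user or session ID

    def __post_init__(self):
        if self.category not in FRICTION_CATEGORIES:
            raise ValueError(f"Unknown friction category: {self.category}")

def tag_friction(category: str, layer_hint: str) -> str:
    """Serialize a content-free friction event for the product team's review queue."""
    event = FrictionEvent(
        category=category,
        layer_hint=layer_hint,
        session_bucket=time.strftime("%Y-%m-%d-%H"),  # hour-level bucket only
    )
    return json.dumps(asdict(event))

if __name__ == "__main__":
    print(tag_friction("date_entry_dead_end", "Skin"))
```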
    Tension point example 2: MedicalMind's diagnostic dilemma (Friction between Services and Skeleton)
    Thousands of healthcare providers use MedicalMind, an AI-powered clinical decision support tool. Dr. Rina Patel, an internal medicine physician at a busy community hospital, relies on it to stay current with rapidly evolving medical research while managing her patient load.
    Thanks to a groundbreaking update, a MedicalMind AI model (Services layer) is familiar with new medical research data and can recognize newly discovered connections between previously unrelated symptoms across different medical specialties. For example, it identified patterns linking certain dermatological symptoms to early indicators of cardiovascular issues — connections not yet widely recognized in standard medical taxonomies.
    But MedicalMind's information architecture (Skeleton layer) was tailored to traditional medical classification systems, so it's organized by body system, conditions by specialty, and treatments by mechanism of action. The MedicalMind team constructed this structure based on how doctors were traditionally trained to approach medical knowledge.
    Problem: When Dr. Patel enters a patient's constellation of symptoms (Sessions layer), MedicalMind's AI can recognize potentially valuable cross-specialty patterns (Services layer). But these insights can't be optimally organized and presented because the underlying information architecture (Skeleton layer) doesn't easily accommodate the new findings and relationships. The AI either forces the insights into ill-fitting categories or presents them as disconnected "additional notes" that tend to be overlooked. That reduces their clinical utility and Dr. Patel's trust in the system.
    Constructive turbulence opportunities:
    a) Create an "emerging patterns" framework within the information architecture (Skeleton layer) that can accommodate new AI-identified patterns in ways that augment, rather than disrupt, the familiar classification systems that doctors rely on (see the sketch after this example).
    b) Design flexible visualization components and interaction patterns and styles (in the Skin layer) specifically for exploring, discussing, and documenting cross-category relationships. Let doctors toggle between traditional taxonomies and newer, AI-generated knowledge maps depending on their needs and comfort level.
    c) Implement a clinician feedback loop where specialists can validate and discuss new AI-surfaced relationships, gradually promoting validated patterns into the main classification system.
    These improvements will make MedicalMind more adaptive to emerging medical knowledge while maintaining the structural integrity that healthcare professionals rely on for critical decisions. This provides more efficient assistants for clinicians and better health for patients.
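    Opportunity (a) in this example amounts to letting AI-surfaced relationships sit beside the established taxonomy until clinicians validate them. The sketch below illustrates only that data shape; the fields, statuses, and example pattern are assumptions, not medical guidance and not an actual schema for the (fictional) MedicalMind product.
```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class ReviewStatus(Enum):
    EMERGING = "emerging"        # surfaced by the model, not yet reviewed
    VALIDATED = "validated"      # endorsed by specialist reviewers
    REJECTED = "rejected"        # reviewed and declined

@dataclass
class EmergingPattern:
    """An AI-surfaced cross-specialty relationship held outside the core taxonomy."""
    source_category: str
    related_category: str
    rationale: str
    status: ReviewStatus = ReviewStatus.EMERGING
    reviewer_notes: List[str] = field(default_factory=list)

    def promote(self, note: str) -> None:
        """Mark a clinician-validated pattern as ready for the main classification system."""
        self.reviewer_notes.append(note)
        self.status = ReviewStatus.VALIDATED

if __name__ == "__main__":
    p = EmergingPattern(
        source_category="Dermatology/skin-finding",
        related_category="Cardiology/early-risk-indicator",
        rationale="Model-identified co-occurrence pending specialist review",
    )
    p.promote("Reviewed in weekly cross-specialty board")
    print(p.status.value, p.reviewer_notes)
```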
    Tension point example 3: ScienceSeeker's hypothesis bottleneck (Friction between Skin and Services)
    ScienceSeeker is an AI research assistant used by scientists worldwide. Dr. Elena Rodriguez, a molecular biologist, uses it to investigate protein interactions for targeted cancer drug delivery.
    The AI engine (Services layer) recently gained the ability to generate sophisticated hypothesis trees with multiple competing explanations, track confidence levels for each branch, and identify which experiments would most efficiently disambiguate between theories. It can reason across scientific domains, connecting molecular biology with physics, chemistry, and computational modeling.
    But the interface (Skin layer) remains locked in a traditional chatbot paradigm — a single-threaded exchange with responses appearing sequentially in a scrolling window.
    Problem: The AI engine and the problem space are natively multithreaded and multimodal, but the UI is limited to single-threaded conversation. When Dr. Rodriguez inputs her experimental results (Sessions layer), the AI generates a rich, multidimensional analysis (Services layer), but must flatten this complex reasoning into linear text (Skin layer). Critical relationships between hypotheses become buried in paragraphs, probability comparisons are difficult, and the holistic picture of how variables influence multiple hypotheses is lost. Dr. Rodriguez resorts to taking screenshots and manually drawing diagrams to reconstruct the reasoning that the AI possesses but cannot visually express.
    Constructive turbulence opportunities:
    a) Develop an expandable, interactive, infinite-canvas "hypothesis tree" visualization (Skin) that helps the AI dynamically represent multiple competing explanations and their relationships. Scientists can interact with this to explore different branches spatially rather than sequentially.
    b) Create a dual-pane interface that maintains the chat for simple queries but provides the infinite canvas for complex reasoning, transitioning seamlessly based on response complexity.
    c) Implement collaborative, interactive node-based diagrams for multi-contributor experiment planning, where potential experiments appear as nodes showing how they would affect confidence in different hypothesis branches.
    This would transform ScienceSeeker's limited text assistant into a scientific reasoning partner. It would help researchers visualize and interact with complex possibilities in ways that better fit how they tackle multidimensional problems.
    Navigating the future with AI Pace Layers
    AI Pace Layers offers product teams a new framework for seeing and shaping the bewildering structures and dynamics that power AI products. By recognizing the evolving layers and heeding and designing for their boundaries, AI design teams can:
    • Transform tension points into constructive innovation
    • Anticipate friction before it damages the product experience
    • Grow resilient and humane AI systems that absorb and integrate rapid technological change without losing sight of human needs.
    The framework's value isn't in rigid categorization, but in recognizing how components interact across timescales. For AI product teams, this awareness enables more thoughtful design choices that prevent destructive shearing that can tear apart an AI system.
    This framework is a work in progress, evolving alongside the AI landscape it describes. I'd love to hear from you, especially if you've built successful AI products and have insights on how this model could better reflect your experience. Please drop me a line or add a comment. Let's develop more effective approaches to creating AI systems that enhance human potential while respecting human agency.
    Part of the Mindful AI Design series. Also see:
    • The effort paradox in AI design: Why making things too easy can backfire
    • Black Mirror: "Override". Dystopian storytelling for humane AI design
    Stay updated: Subscribe to be notified when new articles in the series are published. Join our community of designers, product managers, founders and ethicists as we shape the future of mindful AI design.
    AI Pace Layers: a framework for resilient product design was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Fear Street: Prom Queen Director Sees Franchise as the Next Halloween

    Anyone who has seen the first three movies in the Fear Street series knows that they’re indebted to horror of the past. Fear Street: 1994 borrowed from Scream‘s self-awareness. Fear Street: 1978 had a masked killer stalking a campground, just like Friday the 13th: Part II. The folk horror films from the 1960s and ’70s provided a model for Fear Street: 1666.
    Fear Street: Prom Queen, though, breaks from the original trilogy by telling a standalone story. For Prom Queen writer and director Matt Palmer, that division brings to mind another horror forerunner.
    “I keep thinking about Halloween III: The Season of the Witch, which I quite like even though it didn’t go well with audiences in the 1980s,” Palmer tells Den of Geek. Halloween III famously ditched Michael Myers for a new story about cursed masks and Celtic witches. It was an attempt to turn the series into an anthology instead of the continuing adventures of a silent killer.
    “I like that idea of a Halloween franchise, a world where you could just tell vastly different stories from different subgenres. I think there’s potential for that to happen in Fear Street.”

    The latest entry in the movie adaptations of author R.L. Stine‘s teen novels, Prom Queen follows teen Lori Granger (India Fowler), who becomes an unlikely prom queen favorite when a masked killer starts offing the competition. In addition to young stars such as Fowler, Suzanna Son, and Ariana Greenblatt, Prom Queen also features an impressive adult cast, which includes Lili Taylor, Katherine Waterston, and 2000s mainstay Chris Klein.
    But the most surprising name in the credits is Palmer’s, and not just because he takes the place of Leigh Janiak, who directed the first three films. Palmer’s debut Calibre focused on two Scottish young adults whose friendship is tested when a hunting trip goes horribly wrong. It’s rural and thoughtful, and a million miles from the 1980s American setting of Prom Queen. But to Palmer, the two films feel complementary.
    “I guess there’s two of me,” he says, thoughtfully. “There’s one side of me that’s into heavier movies and quite intense thrillers in the Deliverance mold. But horror was my first love in terms of genre. I do a festival at an all-night horror event in the UK once a year. We show five horror movies from the ’70s and ’80s, all the way through the night.”
    That experience makes Prom Queen “a dream project” for Palmer, “making a movie that could potentially fit in with the ’80s movies I show at my event.”
    While there’s plenty of ’80s influence in Prom Queen, Palmer also adds elements of giallo, the lurid Italian murder mysteries that were precursors to the American slashers, especially with the look of the central masked killer.
    “I really liked the black leather raincoat and the black gloves in gialli, so I started from that,” says Palmer of his process of designing the killer. “Then our concept artist said the killer can’t be all black because they’ll get lost in the dark. So we started looking at a yellow coat but that felt a bit too much like Alice Sweet Alice—which was a visual influence on the movie. We looked at blue and then the concept artist announced red on red, and we were like ‘boom! Yeah!'”

    Palmer brought a similar level of erudition to designing what many would consider the most important part of a slasher: the over-the-top kills.


    “One of the things I’ve noticed about modern slashers is that they sometimes don’t have wide shots in the kill scenes. I think that’s a mistake because I want the audience to understand the space where the kill is going to happen, and then you can start cutting into closer shots. Because then they can compute from that wide shot where people are. It makes the scene more frightening because you know where the dark spots are and you know how big the room is.”
    As academic as that approach may sound, Palmer’s careful to keep focused on the main thing, the blood and guts that audiences expect. “We shot all of the kill scenes in one day,” Palmer reveals. “Some of them have a lot of shots, so they were heavily storyboarded,” meaning that Palmer and his team made comic-book-style drawings of every shot in the sequence, so they could shoot them more efficiently.
    “There was a funny moment when we were storyboarding one of our kills and we got really excited about the lighting, because it’s somebody moving through different planes of lighting and you can see certain things. The storyboard artists and the director of photography [Márk Gyõri] started talking about this mist and the light, and it started turning into a real art film geek conversation, all about the mystery. Then the storyboard artist turns to me and asks, ‘So what happens next?’ and I said, ‘And then all her guts fall out.’ Let’s not forget what kind of movie we were really making.”
    While that might sound like he’s committed to making a lean and mean slasher (and he did emphasize that he wanted the film to come in under 90 minutes), Palmer does find surprising moments of stillness in Prom Queen.
    “I didn’t realize this until after Calibre, but I give scenes a bit of breathing space so you can be with the characters and go a bit deeper with them. But then in between scenes, the escalation of plot is quite steep.”

    He adds, “I prefer movies that are a bit more sedately paced, but sometimes I’m watching a movie from the ’80s and wondering, ‘Why are we holding here? Cut, cut, cut! I’ve got the information, so move on!’ People assimilate information faster now, so I’m trying to find that sweet spot where you can still have that breathing room to go a little bit deeper with the characters, but also be aware that people need things these days to move a bit quicker.”
    Palmer’s awareness of both classic horror and modern audiences makes him a perfect choice for the Fear Street franchise, which has a huge audience among early teens, newcomers to the genre.
    “I feel like the characters should be youngsters and the focus should be on the younger characters,” Palmer explains. “I went to my first all-night horror event when I was 16. I was underage and it was the most exciting thing, and I think that’s the genesis of my process. I asked myself what kind of movie I would have wanted to see when I was 15 and tried to go back and capture a bit of that magic.”
    For the other big audience of Fear Street, Palmer had to go beyond himself and get some outside help. “I think there’s also a skew towards the female in Fear Street’s following, so we all wanted to have a female-led story. That was obviously a challenge for me because, you know, I’m male. Fortunately, I had really strong female producers on this to guide me if I went astray on any of the characterizations.”
    After seeing Prom Queen, most will agree that Palmer didn’t go astray in any regard, which raises some questions. Prom Queen may be a one-off, but does Palmer have more to say within the world of the series?
    “Well, I’ve had my dream project in the franchise, so I don’t want to be greedy. But if I was going to do another one, it would probably take place a couple of years later in the ’80s and be a Satanic Panic thing with ouija boards.”

    Palmer trails off here, not wanting to get ahead of himself. “But I’ve already had my Fear Street adventure,” he says with a smile, gesturing back to the Halloween-style anthology that he wants the franchise to become. Still, if Prom Queen hits with fans as well as the other Fear Street movies, it’s hard to imagine that we won’t see Palmer making his Satanic Panic movie soon.
    Fear Street: Prom Queen arrives on Netflix on May 23, 2025.
    #fear #street #prom #queen #director
    Fear Street: Prom Queen Director Sees Franchise as the Next Halloween
    Anyone who has seen the first three movies in theFear Street series knows that they’re indebted to horror of the past. Fear Street: 1994 borrowed from Scream‘s self-awareness. Fear Street: 1978 had a masked killer stalking a campground, just like Friday the 13th: Part II. The folk horror films from the 1960s and ’70s provided a model for Fear Street: 1666. Fear Street: Prom Queen breaks from the original trilogy though by telling a standalone story. For Prom Queen writer and director Matt Palmer, that division brings to mind another horror forerunner. “I keep thinking about Halloween III: The Season of the Witch, which I quite like even though it didn’t go well with audiences in the 1980s,” Palmer tells Den of Geek. Halloween III famously ditched Michael Myers for a new story about cursed masks and Celtic witches. It was an attempt to turn the series into an anthology instead of the continuing adventures of a silent killer. “I like that idea of a Halloween franchise, a world where you could just tell vastly different stories form different subgenres. I think there’s potential for that to happen in Fear Street.” The latest entry in the movie adaptations of author R.L. Stine‘s teen novels, Prom Queen follows teen Lori Granger, who becomes an unlikely prom queen favorite when a masked killer starts offing the competition. In addition to young stars such as Fowler, Suzanna Son, and Ariana Greenblatt, Prom Queen also features an impressive adult cast, which includes Lili Taylor, Katherine Waterston, and 2000s mainstay Chris Klein. But the most surprising name in the credits is Palmer’s, and not just because he takes the place of Leigh Janiak, who directed the first three films. Palmer’s debut Calibre focused on two Scottish young adults whose friendship is tested when a hunting trip goes horribly wrong. It’s rural and thoughtful, and a million miles from the 1980s American setting of Prom Queen. But to Palmer, the two films both feel complementary. “I guess there’s two of me,” he says, thoughtfully. “There’s one side of me that’s into heavier movies and quite intense thrillers in the Deliverance mold. But horror was my first love in terms of genre. I do a festival at an all-night horror event in the UK once a year. We show five horror movies from the ’70s and ’80s, all the way through the night.” That experience makes Prom Queen “a dream project” for Palmer, “making a movie that could potentially fit in with the ’80s movies I show at my event.” While there’s plenty of ’80s influence in Prom Queen, Palmer also adds elements of giallo, the lurid Italian murder mysteries that were precursors to the American slashers, especially with the look of the central masked killer. “I really liked the black leather raincoat and the black gloves in gialli, so I started from that,” says Palmer of his process of designing the killer. “Then our concept artist said the killer can’t be all black because they’ll get lost in the dark. So we started looking at a yellow coat but that felt a bit too much like Alice Sweet Alice—which was a visual influence on the movie. We looked at blue and then the concept artist announced red on red, and we were like ‘boom! Yeah!'” Palmer brought a similar level of erudition to designing what many would consider the most important part of a slasher: the over the top kills. Join our mailing list Get the best of Den of Geek delivered right to your inbox! “One of the things I’ve noticed about modern slashers is that they sometimes don’t have wide shots in the kill scenes. 
I think that’s a mistake because I want the audience to understand the space where the kill is going to happen, and then you can start cutting into closer shots. Because then they can compute from that wide shot where people are. It makes the scene more frightening because you know where the dark spots are and you know how big the room is.” As academic as that approach may sound, Palmer’s careful to keep focused on the main thing, the blood and guts that audiences expect. “We shot all of the kill scenes in one day,” Plamer reveals. “Some of them have a lot of shots, so they were heavily storyboarded,” meaning that Palmer and his team made comic book style drawings of every shot in the sequence, so they could shoot them more efficiently. “There was a funny moment when we were storyboarding one of our kills and we got really excited about the lighting, because it’s somebody moving through different planes of lighting and you can see certain things. The storyboard artists and the director of photographystarted talking about this mist and the light, and it started turning into a real art film geek conversation, all about the mystery. Then the storyboard artist turns to me and asks, ‘so what happens next?'” and I said, “and then all her guts fall out.’ Lets’s not forget what kind of movie we were really making.” While that might sound like he’s committed to making a lean and mean slasher, and he did emphasize the fact that he wanted the film to come in under 90 minutes, Palmer does find surprising moments of stillness in Prom Queen. “I didn’t realize this until after Calibre, but I give scenes a bit of breathing space so you can be with the characters and go a bit deeper with them. But then in between scenes, the escalation of plot is quite steep.” He adds, “I prefer movies that are a bit more sedately paced, but sometimes I’m watching a movie from the ’80s and wondering, ‘Why are we holding here? Cut, cut, cut! I’ve got the information, so move on!’ People assimilate information faster now, so I’m trying to find that sweet spot where you can still have that breathing room to go a little bit deeper with the characters, but also be aware that people need things these days to move a bit quicker.” Palmer’s awareness of both classic horror and modern audiences make him a perfect choice for the Fear Street franchise, which has a huge audience among early teens, newcomers to the genre. “I feel like the characters should be youngsters and the focus should be on the younger characters,” Palmer explains. “I went to my first all-night horror event when I was 16. I was underage and it was the most exciting thing, and I think that’s the genesis of my process. I asked myself what kind of movie I would have wanted to see when I was 15 and tried to go back and capture a bit of that magic.” For the other big audience of Fear Street, Palmer had to go beyond himself and get some outside help. “I think there’s also a skew towards the female in Fear Street’s following, so we all wanted to have a female-led story. That was obviously a challenge for me because, you know, I’m male. Fortunately, I had really strong female producers on this to guide me if I went astray on any of the characterizations.” After seeing Prom Queen, most will agree that Palmer didn’t go astray in any regard, which raises some questions. Prom Queen may be a one-off, but does Palmer have more to say within the world of the series? “Well, I’ve had my dream project in the franchise, so I don’t want to be greedy. 
But If I was going to do another one, it would probably take place a couple of years later in the ’80s and be a Satanic Panic thing with ouija boards.” Palmer trails off here, not wanting to get ahead of myself. “But I’ve already had my Fear Street adventure,” he says with a smile and gesturing back to the Halloween-style anthology that he wants the franchise to become. Still, if Prom Queen hits with fans as well as the other Fear Street movies, it’s hard to imagine that we won’t Palmer making his Satanic Panic movie soon. Fear Street: Prom Queen arrives on Netflix on May 23, 2025. #fear #street #prom #queen #director
  • The first teeth were sensory organs on the skin of ancient fish

    CT scan of the front of a skate, showing the hard, tooth-like denticles (orange) on its skin (Yara Haridy)
    Teeth first evolved as sensory organs, not for chewing, according to a new analysis of animal fossils. The first tooth-like structures seem to have been sensitive nodules on the skin of early fish that could detect changes in the surrounding water.
    The finding supports a long-standing idea that teeth first evolved outside the mouth, says Yara Haridy at the University of Chicago.
    While there was some evidence to back this up, there was an obvious question. “What good is having all these teeth on the outside?” says Haridy. One possibility was that they served as defensive armour, but Haridy thinks there was more to it. “It’s great to cover yourself in hard things, but what if those hard things could also help you sense your environment?”
    True teeth are only found in backboned vertebrates, like fish and mammals. Some invertebrates have tooth-like structures, but the underlying tissues are completely different. This means teeth originated during the evolution of the earliest vertebrates: fish.
    Haridy and her team re-examined fossils that have been claimed to be the oldest examples of fish teeth, using a synchrotron to scan them in unprecedented detail.

    They focused first on fragmentary fossils of animals called Anatolepis, which date from the later part of the Cambrian Period, which ran from 539 million to 487 million years ago, and early in the Ordovician Period, which ran from 487 million to 443 million years ago. These animals had a hard exoskeleton, dotted with tubules.
    These had been interpreted as being tubules of dentine, one of the hard tissues that make up teeth. In human teeth, dentine is the yellow layer under the hard white enamel and it performs many functions, including sensing pressure, temperature and pain.
    This led to the idea that the tubules are precursors to teeth, called odontodes, and that Anatolepis is an early fish.
    That isn’t what Haridy and her team found. “We saw that the internal anatomy [of the tubules] didn’t actually look like a vertebrate at all,” she says. After examining structures from a range of animals, they found that the tubules were most similar to features called sensilla found on the exoskeletons of arthropods like insects and spiders. These look like pegs or small hairs and detect a range of phenomena. “It can be everything from taste to vibration to changes in air currents,” says Haridy.
    This means Anatolepis is an arthropod, not a fish, and its tubules aren’t the direct precursors to teeth.

    “Dentine is likely a vertebrate novelty, yet the sensory capabilities of a hardened external surface were present much earlier in invertebrates,” says Gareth Fraser at the University of Florida in Gainesville, who wasn’t involved in the study.
    With Anatolepis out of the picture, the team says, the oldest known teeth are those of Eriptychius, which is only known from the Ordovician Period. These do have true dentine – in odontodes on their skin.
    Haridy says invertebrates like Anatolepis and early vertebrates like Eriptychius independently evolved hard, sensory nodules on their skin. “These two very different animals needed to sense their way through the muck of ancient seas,” she says. In line with this, the team found that the odontodes on the skin of some modern fish still have nerves – suggesting a sensory function.
    Once some fish became active predators, they needed a way to hold onto their prey, so the hard odontodes made their way to the mouth, where they could be used to bite.
    “Based on the available data, tooth-like structures likely first evolved in the skin of early vertebrates, prior to the oral invasion of these structures that became teeth,” says Fraser.
    Journal reference: Nature, DOI: 10.1038/s41586-025-08944-w