• Decoding The SVG path Element: Line Commands

    In a previous article, we looked at some practical examples of how to code SVG by hand. In that guide, we covered the basics of the SVG elements rect, circle, ellipse, line, polyline, and polygon.
    This time around, we are going to tackle a more advanced topic, the absolute powerhouse of SVG elements: path. Don’t get me wrong; I still stand by my point that image paths are better drawn in vector programs than coded. But when it comes to technical drawings and data visualizations, the path element unlocks a wide array of possibilities and opens up the world of hand-coded SVGs.
    The path syntax can be really complex. We’re going to tackle it in two separate parts. In this first installment, we’re learning all about straight and angular paths. In the second part, we’ll make lines bend, twist, and turn.
    Required Knowledge And Guide Structure
    Note: If you are unfamiliar with the basics of SVG, such as the subject of viewBox and the basic syntax of the simple elements, I recommend reading my guide before diving into this one. You should also familiarize yourself with <text> if you want to understand each line of code in the examples.
    Before we get started, I want to quickly recap how I code SVG using JavaScript. I don’t like dealing with numbers and math, and reading SVG code with numbers filled into every attribute makes me lose all understanding of it. By giving coordinates names and having all my math easy to parse and write out, I have a much better time with this type of code, and I think you will, too.
    The goal of this article is more about understanding path syntax than it is about doing placement or how to leverage loops and other more basic things. So, I will not run you through the entire setup of each example. I’ll instead share snippets of the code, but they may be slightly adjusted from the CodePen or simplified to make this article easier to read. However, if there are specific questions about code that are not part of the text in the CodePen demos, the comment section is open.
    To keep this all framework-agnostic, the code is written in vanilla JavaScript.
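    For orientation, here is a minimal sketch of what that approach can look like in practice. The coordinate names, the values, and the assumption that an <svg> element already exists on the page are all illustrative; this is not the exact setup used in the demos:
    // Hypothetical example: named coordinates instead of raw numbers.
    const start = { x: 10, y: 55 };
    const end = { x: 100, y: 55 };

    // Build the path definition from those names.
    const d = `M${start.x} ${start.y} L${end.x} ${end.y}`;

    // Create the element in the SVG namespace and append it to the drawing.
    const SVG_NS = "http://www.w3.org/2000/svg";
    const path = document.createElementNS(SVG_NS, "path");
    path.setAttribute("d", d);
    path.setAttribute("stroke", "currentColor");
    path.setAttribute("fill", "none");
    document.querySelector("svg").appendChild(path);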
    Setting Up For Success
    As the path element relies on our understanding of some of the coordinates we plug into the commands, I think it is a lot easier if we have a bit of visual orientation. So, all of the examples will be coded on top of a visual representation of a traditional viewBox setup with the origin in the top-left corner (so, values in the shape of 0 0 ${width} ${height}). I added text labels as well to make it easier to point you to specific areas within the grid.
    Please note that I recommend being careful when adding text within the <text> element in SVG if you want your text to be accessible. If the graphic relies on text scaling like the rest of your website, it would be better to have it rendered through HTML. But for our examples here, it should be sufficient.
    So, this is what we’ll be plotting on top of:
    See the Pen SVG Viewbox Grid Visual [forked] by Myriam.
    Alright, we now have a ViewBox Visualizing Grid. I think we’re ready for our first session with the beast.
    Enter path And The All-Powerful d Attribute
    The <path> element has a d attribute, which speaks its own language. So, within d, you’re talking in terms of “commands”.
    When I think of non-path versus path elements, I like to think that the reason why we have to write much more complex drawing instructions is this: all non-path elements are just dumber paths. In the background, they have one pre-drawn path shape that they will always render based on a few parameters you pass in. But path has no default shape. The shape logic has to be exposed to you, while it can be neatly hidden away for all other elements.
    Let’s learn about those commands.
    Where It All Begins: M
    The first command, which is where each path begins, is the M command, which moves the pen to a point. This command places your starting point, but it does not draw a single thing. A path with just an M command is an auto-delete when cleaning up SVG files. It takes two arguments: the x and y coordinates of your start position.
    const uselessPathCommand = `M${start.x} ${start.y}`;
    Basic Line Commands: M, L, H, V
    These are fun and easy: L, H, and V all draw a line from the current point to the point specified. L takes two arguments: the x and y positions of the point you want to draw to.
    const pathCommandL = `M${start.x} ${start.y} L${end.x} ${end.y}`;
    H and V, on the other hand, only take one argument because they are only drawing a line in one direction. For H, you specify the x position, and for V, you specify the y position. The other value is implied.
    const pathCommandH = `M${start.x} ${start.y} H${end.x}`;
    const pathCommandV = `M${start.x} ${start.y} V${end.y}`;
    To visualize how this works, I created a function that draws the path, as well as points with labels on them, so we can see what happens.
    See the Pen Simple Lines with path [forked] by Myriam.
    We have three lines in that image. The L command is used for the red path. It starts with M at (10,10), then moves diagonally down to (100,100). The command is: M10 10 L100 100.
    The blue line is horizontal. It starts at (10,55) and should end at (100,55). We could use the L command, but we’d have to write 55 again. So, instead, we write M10 55 H100, and then SVG knows to look back at the y value of M for the y value of H.
    It’s the same thing for the green line, but when we use the V command, SVG knows to refer back to the x value of M for the x value of V.
    If we compare the resulting horizontal path with the same implementation in a <line> element, we may

    Notice how much more efficient path can be, and
    Remove quite a bit of meaning for anyone who doesn’t speak path.

    Because, as we look at these strings, one of them is called “line”. And while the rest doesn’t mean anything out of context, the line definitely conjures a specific image in our heads.
    <path d="M 10 55 H 100" />
    <line x1="10" y1="55" x2="100" y2="55" />

    Making Polygons And Polylines With Z
    In the previous section, we learned how path can behave like <line>, which is pretty cool. But it can do more. It can also act like polyline and polygon.
    Remember how those two basically work the same, but polygon connects the first and last point, while polyline does not? The path element can do the same thing. There is a separate command to close the path with a line, which is the Z command.

    const polyline2Points = `M${start.x} ${start.y} L${p1.x} ${p1.y} L${p2.x} ${p2.y}`;
    const polygon2Points = `M${start.x} ${start.y} L${p1.x} ${p1.y} L${p2.x} ${p2.y} Z`;

    So, let’s see this in action and create a repeating triangle shape. Every odd time, it’s open, and every even time, it’s closed. Pretty neat!
    See the Pen Alternating Triangles [forked] by Myriam.
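    To make the generation part concrete, here is a hedged sketch of how such an alternating row of triangles could be produced in a loop. The sizes, spacing, and selector are assumptions for illustration, not the code behind the demo:
    // Hypothetical generator: every second triangle gets a Z and is closed.
    const SVG_NS = "http://www.w3.org/2000/svg";
    const svg = document.querySelector("svg");
    const size = 20;
    const gap = 10;

    for (let i = 0; i < 6; i++) {
      const x = gap + i * (size + gap);
      const y = 10;
      // Triangle pointing right: start, tip, bottom corner.
      let d = `M${x} ${y} L${x + size} ${y + size / 2} L${x} ${y + size}`;
      if (i % 2 === 1) d += " Z"; // close the 2nd, 4th, 6th shape
      const path = document.createElementNS(SVG_NS, "path");
      path.setAttribute("d", d);
      path.setAttribute("stroke", "currentColor");
      path.setAttribute("fill", "none");
      svg.appendChild(path);
    }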
    When it comes to comparing path versus polygon and polyline, the other tags tell us about their names, but I would argue that fewer people know what a polygon is versus what a line is. The argument to use these two tags over path for legibility is weak, in my opinion, and I guess you’d probably agree that this looks like equal levels of meaningless string given to an SVG element.
    <path d="M0 0 L86.6 50 L0 100 Z" />
    <polygon points="0,0 86.6,50 0,100" />

    <path d="M0 0 L86.6 50 L0 100" />
    <polyline points="0,0 86.6,50 0,100" />

    Relative Commands: m, l, h, v
    All of the line commands exist in absolute and relative versions. The difference is that the relative commands are lowercase, e.g., m, l, h, and v. The relative commands are always relative to the last point, so instead of declaring an x value, you’re declaring a dx value, saying this is how many units you’re moving.
    Before we look at the example visually, I want you to look at the following three line commands. Try not to look at the CodePen beforehand.
    const lines = [
      { d: `M10 10 L 10 30 L 30 30`, color: "var(--_red)" },
      { d: `M40 10 l 0 20 l 20 0`, color: "var(--_blue)" },
      { d: `M70 10 l 0 20 L 90 30`, color: "var(--_green)" }
    ];

    As I mentioned, I hate looking at numbers without meaning, but there is one number whose meaning is pretty constant in most contexts: 0. Seeing a 0 in combination with a command I just learned means relative manages to instantly tell me that nothing is happening. Seeing l 0 20 by itself tells me that this line only moves along one axis instead of two.
    And looking at that entire blue path command, the repeated 20 value gives me a sense that the shape might have some regularity to it. The first path does a bit of that by repeating 10 and 30. But the third? As someone who can’t do math in my head, that third string gives me nothing.
    Now, you might be surprised, but they all draw the same shape, just in different places.
    See the Pen SVG Compound Paths [forked] by Myriam.
    So, how valuable is it that we can recognize the regularity in the blue path? Not very, in my opinion. In some cases, going with the relative value is easier than an absolute one. In other cases, the absolute is king. Neither is better nor worse.
    And, in all cases, that previous example would be much more efficient if it were set up with a variable for the gap, a variable for the shape size, and a function to generate the path definition that’s called from within a loop so it can take in the index to properly calculate the start point.
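    As a rough sketch of that suggestion (the variable names and values here are assumptions, not the demo code), the refactor could look like this:
    // Hypothetical refactor: one generator function instead of three hand-written paths.
    const gap = 30;       // horizontal distance between the start points
    const shapeSize = 20; // the shape runs 20 units down, then 20 units across

    function cornerPath(index) {
      const startX = 10 + index * gap;
      const startY = 10;
      // Relative commands keep the shape definition identical for every index.
      return `M${startX} ${startY} l 0 ${shapeSize} l ${shapeSize} 0`;
    }

    const definitions = [0, 1, 2].map(cornerPath);
    // definitions[1] is "M40 10 l 0 20 l 20 0", the same as the blue path above.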

    Jumping Points: How To Make Compound Paths
    Another very useful thing is something you don’t see visually in the previous CodePen, but it relates to the grid and its code.
    I snuck in a grid drawing update.
    With the method used in earlier examples, using line to draw the grid, the above CodePen would’ve rendered the grid with 14 separate elements. If you go and inspect the final code of that last CodePen, you’ll notice that there is just a single path element within the .grid group.
    It looks like this, which is not fun to look at but holds the secret to how it’s possible:

    <path d="M0 0 H110 M0 10 H110 M0 20 H110 M0 30 H110 M0 0 V45 M10 0 V45 M20 0 V45 M30 0 V45 M40 0 V45 M50 0 V45 M60 0 V45 M70 0 V45 M80 0 V45 M90 0 V45" stroke="currentColor" stroke-width="0.2" fill="none"></path>

    If we take a close look, we may notice that there are multiple M commands. This is the magic of compound paths.
    Since the M/m commands don’t actually draw and just place the cursor, a path can have jumps.

    So, whenever we have multiple paths that share common styling and don’t need to have separate interactions, we can just chain them together to make our code shorter.
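    As a sketch of how such a compound definition can be assembled, here is one way to build the grid path from above in a loop. The grid dimensions are taken from that path; the loop itself and the .grid selector are assumptions about the setup:
    // Hypothetical grid builder: one d string with many M jumps, one element in the DOM.
    const width = 110;
    const height = 45;
    const cell = 10;

    let gridD = "";
    for (let y = 0; y <= 30; y += cell) {
      gridD += `M0 ${y} H${width} `; // one horizontal line per row
    }
    for (let x = 0; x <= 90; x += cell) {
      gridD += `M${x} 0 V${height} `; // one vertical line per column
    }

    const SVG_NS = "http://www.w3.org/2000/svg";
    const gridPath = document.createElementNS(SVG_NS, "path");
    gridPath.setAttribute("d", gridD.trim());
    gridPath.setAttribute("stroke", "currentColor");
    gridPath.setAttribute("stroke-width", "0.2");
    gridPath.setAttribute("fill", "none");
    document.querySelector(".grid").appendChild(gridPath);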
    Coming Up Next
    Armed with this knowledge, we’re now able to replace line, polyline, and polygon with path commands and combine them in compound paths. But there is so much more to uncover because path doesn’t just offer foreign-language versions of lines but also gives us the option to code circles and ellipses that have open space and can sometimes also bend, twist, and turn. We’ll refer to those as curves and arcs, and discuss them more explicitly in the next article.
    Further Reading On SmashingMag

    “Mastering SVG Arcs,” Akshay Gupta
    “Accessible SVGs: Perfect Patterns For Screen Reader Users,” Carie Fisher
    “Easy SVG Customization And Animation: A Practical Guide,” Adrian Bece
    “Magical SVG Techniques,” Cosima Mielke
  • Smashing Animations Part 4: Optimising SVGs

    SVG animations take me back to the Hanna-Barbera cartoons I watched as a kid. Shows like Wacky Races, The Perils of Penelope Pitstop, and, of course, Yogi Bear. They inspired me to lovingly recreate some classic Toon Titles using CSS, SVG, and SMIL animations.
    But getting animations to load quickly and work smoothly needs more than nostalgia. It takes clean design, lean code, and a process that makes complex SVGs easier to animate. Here’s how I do it.

    Start Clean And Design With Optimisation In Mind
    Keeping things simple is key to making SVGs that are optimised and ready to animate. Tools like Adobe Illustrator convert bitmap images to vectors, but the output often contains too many extraneous groups, layers, and masks. Instead, I start cleaning in Sketch, work from a reference image, and use the Pen tool to create paths.
    Tip: Affinity Designer (UK) and Sketch (Netherlands) are alternatives to Adobe Illustrator and Figma. Both are independent and based in Europe. Sketch has been my default design app since Adobe killed Fireworks.

    Beginning With Outlines
    For these Toon Titles illustrations, I first use the Pen tool to draw black outlines with as few anchor points as possible. The more points a shape has, the bigger a file becomes, so simplifying paths and reducing the number of points makes an SVG much smaller, often with no discernible visual difference.

    Bearing in mind that parts of this Yogi illustration will ultimately be animated, I keep outlines for this Bewitched Bear’s body, head, collar, and tie separate so that I can move them independently. The head might nod, the tie could flap, and, like in those classic cartoons, Yogi’s collar will hide the joins between them.

    Drawing Simple Background Shapes
    With the outlines in place, I use the Pen tool again to draw new shapes, which fill the areas with colour. These colours sit behind the outlines, so they don’t need to match them exactly. The fewer anchor points, the smaller the file size.

    Sadly, neither Affinity Designer nor Sketch has tools that can simplify paths, but if you have it, using Adobe Illustrator can shave a few extra kilobytes off these background shapes.

    Optimising The Code
    It’s not just metadata that makes SVG bulkier. The way you export from your design app also affects file size.

    Exporting just those simple background shapes from Adobe Illustrator includes unnecessary groups, masks, and bloated path data by default. Sketch’s code is barely any better, and there’s plenty of room for improvement, even in its SVGO Compressor code. I rely on Jake Archibald’s SVGOMG, which uses SVGO v3 and consistently delivers the best optimised SVGs.
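    If you would rather run the same optimisation locally than paste files into the SVGOMG web app, SVGO can be driven by a small config file. This is a generic sketch of SVGO’s standard configuration format, not part of the workflow described in this article:
    // svgo.config.js: a minimal example using SVGO's default preset.
    module.exports = {
      multipass: true, // keep optimising until the output stops shrinking
      plugins: [
        {
          name: "preset-default",
          params: {
            overrides: {
              // Keep the viewBox so the exported SVG still scales responsively.
              removeViewBox: false,
            },
          },
        },
      ],
    };
    The svgo command-line tool picks this file up automatically from the directory it runs in.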

    Layering SVG Elements
    My process for preparing SVGs for animation goes well beyond drawing vectors and optimising paths — it also includes how I structure the code itself. When every visual element is crammed into a single SVG file, even optimised code can be a nightmare to navigate. Locating a specific path or group often feels like searching for a needle in a haystack.

    That’s why I develop my SVGs in layers, exporting and optimising one set of elements at a time — always in the order they’ll appear in the final file. This lets me build the master SVG gradually by pasting in each cleaned-up section. For example, I start with backgrounds like this gradient and title graphic.

    Instead of facing a wall of SVG code, I can now easily identify the background gradient’s path and its associated linearGradient, and see the group containing the title graphic. I take this opportunity to add a comment to the code, which will make editing and adding animations to it easier in the future:
    <svg ...>
    <defs>
    <!-- ... -->
    </defs>
    <path fill="url" d="…"/>
    <!-- TITLE GRAPHIC -->
    <g>
    <path … />
    <!-- ... -->
    </g>
    </svg>

    Next, I add the blurred trail from Yogi’s airborne broom. This includes defining a Gaussian Blur filter and placing its path between the background and title layers:
    <svg ...>
    <defs>
    <linearGradient id="grad" …>…</linearGradient>
    <filter id="trail" …>…</filter>
    </defs>
    <!-- GRADIENT -->
    <!-- TRAIL -->
    <path filter="url" …/>
    <!-- TITLE GRAPHIC -->
    </svg>

    Then come the magical stars, added in the same sequential fashion:
    <svg ...>
    <!-- GRADIENT -->
    <!-- TRAIL -->
    <!-- STARS -->
    <!-- TITLE GRAPHIC -->
    </svg>

    To keep everything organised and animation-ready, I create an empty group that will hold all the parts of Yogi:
    <g id="yogi">...</g>

    Then I build Yogi from the ground up — starting with background props, like his broom:
    <g id="broom">...</g>

    Followed by grouped elements for his body, head, collar, and tie:
    <g id="yogi">
    <g id="broom">…</g>
    <g id="body">…</g>
    <g id="head">…</g>
    <g id="collar">…</g>
    <g id="tie">…</g>
    </g>

    Since I export each layer from the same-sized artboard, I don’t need to worry about alignment or positioning issues later on — they’ll all slot into place automatically. I keep my code clean, readable, and ordered logically by layering elements this way. It also makes animating smoother, as each component is easier to identify.
    Reusing Elements With <use>
    When duplicate shapes get reused repeatedly, SVG files can get bulky fast. My recreation of the “Bewitched Bear” title card contains 80 stars in three sizes. Combining all those shapes into one optimised path would bring the file size down to 3KB. But I want to animate individual stars, which would almost double that to 5KB:
    <g id="stars">
    <path class="star-small" fill="#eae3da" d="..."/>
    <path class="star-medium" fill="#eae3da" d="..."/>
    <path class="star-large" fill="#eae3da" d="..."/>
    <!-- ... -->
    </g>

    Moving the stars’ fill attribute values to their parent group reduces the overall weight a little:
    <g id="stars" fill="#eae3da">
    <path class="star-small" d="…"/>
    <path class="star-medium" d="…"/>
    <path class="star-large" d="…"/>
    <!-- ... -->
    </g>

    But a more efficient and manageable option is to define each star size as a reusable template:

    <defs>
    <path id="star-large" fill="#eae3da" fill-rule="evenodd" d="…"/>
    <path id="star-medium" fill="#eae3da" fill-rule="evenodd" d="…"/>
    <path id="star-small" fill="#eae3da" fill-rule="evenodd" d="…"/>
    </defs>

    With this setup, changing a star’s design only means updating its template once, and every instance updates automatically. Then, I reference each one using <use> and position them with x and y attributes:
    <g id="stars">
    <!-- Large stars -->
    <use href="#star-large" x="1575" y="495"/>
    <!-- ... -->
    <!-- Medium stars -->
    <use href="#star-medium" x="1453" y="696"/>
    <!-- ... -->
    <!-- Small stars -->
    <use href="#star-small" x="1287" y="741"/>
    <!-- ... -->
    </g>

    This approach makes the SVG easier to manage, lighter to load, and faster to iterate on, especially when working with dozens of repeating elements. Best of all, it keeps the markup clean without compromising on flexibility or performance.
    Adding Animations
    The stars trailing behind Yogi’s stolen broom bring so much personality to the animation. I wanted them to sparkle in a seemingly random pattern against the dark blue background, so I started by defining a keyframe animation that cycles through different opacity levels:
    @keyframes sparkle {
    0%, 100% { opacity: .1; }
    50% { opacity: 1; }
    }

    Next, I applied this looping animation to every use element inside my stars group:
    #stars use {
    animation: sparkle 10s ease-in-out infinite;
    }

    The secret to creating a convincing twinkle lies in variation. I staggered animation delays and durations across the stars using nth-child selectors, starting with the quickest and most frequent sparkle effects:
    /* Fast, frequent */
    #stars use:nth-child(n + 1):nth-child(-n + 10) {
    animation-delay: .1s;
    animation-duration: 2s;
    }

    From there, I layered in additional timings to mix things up. Some stars sparkle slowly and dramatically, others more randomly, with a variety of rhythms and pauses:
    /* Medium */
    #stars use:nth-child(n + 11):nth-child(-n + 20) { ... }

    /* Slow, dramatic */
    #stars use:nth-child(n + 21):nth-child(-n + 30) { ... }

    /* Random */
    #stars use:nth-child(3n + 2) { ... }

    /* Alternating */
    #stars use:nth-child(4n + 1) { ... }

    /* Scattered */
    #stars use:nth-child(n + 31) { ... }

    By thoughtfully structuring the SVG and reusing elements, I can build complex-looking animations without bloated code, making even a simple effect like changing opacity sparkle.

    Then, for added realism, I make Yogi’s head wobble:

    @keyframes headWobble {
    0% { transform: rotate(-0.8deg) translateY(-0.5px); }
    100% { transform: rotate(0.9deg) translateY(0.3px); }
    }

    #head {
    animation: headWobble 0.8s cubic-bezier(0.5, 0.15, 0.5, 0.85) infinite alternate;
    }

    His tie waves:

    @keyframes tieWave {
    0%, 100% { transform: rotateZ(-4deg) rotateY(15deg) scaleX(0.96); }
    33% { transform: rotateZ(5deg) rotateY(-10deg) scaleX(1.05); }
    66% { transform: rotateZ(-2deg) rotateY(5deg) scaleX(0.98); }
    }

    #tie {
    transform-style: preserve-3d;
    animation: tieWave 10s cubic-bezier(0.68, -0.55, 0.27, 1.55) infinite;
    }

    His broom swings:

    @keyframes broomSwing {
    0%, 20% { transform: rotate(-5deg); }
    30% { transform: rotate(-4deg); }
    50%, 70% { transform: rotate(5deg); }
    80% { transform: rotate(4deg); }
    100% { transform: rotate(-5deg); }
    }

    #broom {
    animation: broomSwing 4s cubic-bezier(0.5, 0.05, 0.5, 0.95) infinite;
    }

    And, finally, Yogi himself gently rotates as he flies on his magical broom:

    @keyframes yogiWobble {
    0% { transform: rotate(-2.8deg) translateY(-0.8px) scale(0.998); }
    30% { transform: rotate(1.5deg) translateY(0.3px); }
    100% { transform: rotate(3.2deg) translateY(1.2px) scale(1.002); }
    }

    #yogi {
    animation: yogiWobble 3.5s cubic-bezier(.37, .14, .3, .86) infinite alternate;
    }

    All these subtle movements bring Yogi to life. By developing structured SVGs, I can create animations that feel full of character without writing a single line of JavaScript.
    Try this yourself:
    See the Pen Bewitched Bear CSS/SVG animation [forked] by Andy Clarke.
    Conclusion
    Whether you’re recreating a classic title card or animating icons for an interface, the principles are the same:

    Start clean,
    Optimise early, and
    Structure everything with animation in mind.

    SVGs offer incredible creative freedom, but only if kept lean and manageable. When you plan your process like a production cell — layer by layer, element by element — you’ll spend less time untangling code and more time bringing your work to life.
    #smashing #animations #part #optimising #svgs
    Smashing Animations Part 4: Optimising SVGs
    SVG animations take me back to the Hanna-Barbera cartoons I watched as a kid. Shows like Wacky Races, The Perils of Penelope Pitstop, and, of course, Yogi Bear. They inspired me to lovingly recreate some classic Toon Titles using CSS, SVG, and SMIL animations. But getting animations to load quickly and work smoothly needs more than nostalgia. It takes clean design, lean code, and a process that makes complex SVGs easier to animate. Here’s how I do it. Start Clean And Design With Optimisation In Mind Keeping things simple is key to making SVGs that are optimised and ready to animate. Tools like Adobe Illustrator convert bitmap images to vectors, but the output often contains too many extraneous groups, layers, and masks. Instead, I start cleaning in Sketch, work from a reference image, and use the Pen tool to create paths. Tip: Affinity Designerand Sketchare alternatives to Adobe Illustrator and Figma. Both are independent and based in Europe. Sketch has been my default design app since Adobe killed Fireworks. Beginning With Outlines For these Toon Titles illustrations, I first use the Pen tool to draw black outlines with as few anchor points as possible. The more points a shape has, the bigger a file becomes, so simplifying paths and reducing the number of points makes an SVG much smaller, often with no discernible visual difference. Bearing in mind that parts of this Yogi illustration will ultimately be animated, I keep outlines for this Bewitched Bear’s body, head, collar, and tie separate so that I can move them independently. The head might nod, the tie could flap, and, like in those classic cartoons, Yogi’s collar will hide the joins between them. Drawing Simple Background Shapes With the outlines in place, I use the Pen tool again to draw new shapes, which fill the areas with colour. These colours sit behind the outlines, so they don’t need to match them exactly. The fewer anchor points, the smaller the file size. Sadly, neither Affinity Designer nor Sketch has tools that can simplify paths, but if you have it, using Adobe Illustrator can shave a few extra kilobytes off these background shapes. Optimising The Code It’s not just metadata that makes SVG bulkier. The way you export from your design app also affects file size. Exporting just those simple background shapes from Adobe Illustrator includes unnecessary groups, masks, and bloated path data by default. Sketch’s code is barely any better, and there’s plenty of room for improvement, even in its SVGO Compressor code. I rely on Jake Archibald’s SVGOMG, which uses SVGO v3 and consistently delivers the best optimised SVGs. Layering SVG Elements My process for preparing SVGs for animation goes well beyond drawing vectors and optimising paths — it also includes how I structure the code itself. When every visual element is crammed into a single SVG file, even optimised code can be a nightmare to navigate. Locating a specific path or group often feels like searching for a needle in a haystack. That’s why I develop my SVGs in layers, exporting and optimising one set of elements at a time — always in the order they’ll appear in the final file. This lets me build the master SVG gradually by pasting it in each cleaned-up section. For example, I start with backgrounds like this gradient and title graphic. Instead of facing a wall of SVG code, I can now easily identify the background gradient’s path and its associated linearGradient, and see the group containing the title graphic. 
I take this opportunity to add a comment to the code, which will make editing and adding animations to it easier in the future: <svg ...> <defs> <!-- ... --> </defs> <path fill="url" d="…"/> <!-- TITLE GRAPHIC --> <g> <path … /> <!-- ... --> </g> </svg> Next, I add the blurred trail from Yogi’s airborne broom. This includes defining a Gaussian Blur filter and placing its path between the background and title layers: <svg ...> <defs> <linearGradient id="grad" …>…</linearGradient> <filter id="trail" …>…</filter> </defs> <!-- GRADIENT --> <!-- TRAIL --> <path filter="url" …/> <!-- TITLE GRAPHIC --> </svg> Then come the magical stars, added in the same sequential fashion: <svg ...> <!-- GRADIENT --> <!-- TRAIL --> <!-- STARS --> <!-- TITLE GRAPHIC --> </svg> To keep everything organised and animation-ready, I create an empty group that will hold all the parts of Yogi: <g id="yogi">...</g> Then I build Yogi from the ground up — starting with background props, like his broom: <g id="broom">...</g> Followed by grouped elements for his body, head, collar, and tie: <g id="yogi"> <g id="broom">…</g> <g id="body">…</g> <g id="head">…</g> <g id="collar">…</g> <g id="tie">…</g> </g> Since I export each layer from the same-sized artboard, I don’t need to worry about alignment or positioning issues later on — they’ll all slot into place automatically. I keep my code clean, readable, and ordered logically by layering elements this way. It also makes animating smoother, as each component is easier to identify. Reusing Elements With <use> When duplicate shapes get reused repeatedly, SVG files can get bulky fast. My recreation of the “Bewitched Bear” title card contains 80 stars in three sizes. Combining all those shapes into one optimised path would bring the file size down to 3KB. But I want to animate individual stars, which would almost double that to 5KB: <g id="stars"> <path class="star-small" fill="#eae3da" d="..."/> <path class="star-medium" fill="#eae3da" d="..."/> <path class="star-large" fill="#eae3da" d="..."/> <!-- ... --> </g> Moving the stars’ fill attribute values to their parent group reduces the overall weight a little: <g id="stars" fill="#eae3da"> <path class="star-small" d="…"/> <path class="star-medium" d="…"/> <path class="star-large" d="…"/> <!-- ... --> </g> But a more efficient and manageable option is to define each star size as a reusable template: <defs> <path id="star-large" fill="#eae3da" fill-rule="evenodd" d="…"/> <path id="star-medium" fill="#eae3da" fill-rule="evenodd" d="…"/> <path id="star-small" fill="#eae3da" fill-rule="evenodd" d="…"/> </defs> With this setup, changing a star’s design only means updating its template once, and every instance updates automatically. Then, I reference each one using <use> and position them with x and y attributes: <g id="stars"> <!-- Large stars --> <use href="#star-large" x="1575" y="495"/> <!-- ... --> <!-- Medium stars --> <use href="#star-medium" x="1453" y="696"/> <!-- ... --> <!-- Small stars --> <use href="#star-small" x="1287" y="741"/> <!-- ... --> </g> This approach makes the SVG easier to manage, lighter to load, and faster to iterate on, especially when working with dozens of repeating elements. Best of all, it keeps the markup clean without compromising on flexibility or performance. Adding Animations The stars trailing behind Yogi’s stolen broom bring so much personality to the animation. 
I wanted them to sparkle in a seemingly random pattern against the dark blue background, so I started by defining a keyframe animation that cycles through different opacity levels: @keyframes sparkle { 0%, 100% { opacity: .1; } 50% { opacity: 1; } } Next, I applied this looping animation to every use element inside my stars group: #stars use { animation: sparkle 10s ease-in-out infinite; } The secret to creating a convincing twinkle lies in variation. I staggered animation delays and durations across the stars using nth-child selectors, starting with the quickest and most frequent sparkle effects: /* Fast, frequent */ #stars use:nth-child:nth-child{ animation-delay: .1s; animation-duration: 2s; } From there, I layered in additional timings to mix things up. Some stars sparkle slowly and dramatically, others more randomly, with a variety of rhythms and pauses: /* Medium */ #stars use:nth-child:nth-child{ ... } /* Slow, dramatic */ #stars use:nth-child:nth-child{ ... } /* Random */ #stars use:nth-child{ ... } /* Alternating */ #stars use:nth-child{ ... } /* Scattered */ #stars use:nth-child{ ... } By thoughtfully structuring the SVG and reusing elements, I can build complex-looking animations without bloated code, making even a simple effect like changing opacity sparkle. Then, for added realism, I make Yogi’s head wobble: @keyframes headWobble { 0% { transform: rotatetranslateY; } 100% { transform: rotatetranslateY; } } #head { animation: headWobble 0.8s cubic-bezierinfinite alternate; } His tie waves: @keyframes tieWave { 0%, 100% { transform: rotateZrotateYscaleX; } 33% { transform: rotateZrotateYscaleX; } 66% { transform: rotateZrotateYscaleX; } } #tie { transform-style: preserve-3d; animation: tieWave 10s cubic-bezierinfinite; } His broom swings: @keyframes broomSwing { 0%, 20% { transform: rotate; } 30% { transform: rotate; } 50%, 70% { transform: rotate; } 80% { transform: rotate; } 100% { transform: rotate; } } #broom { animation: broomSwing 4s cubic-bezierinfinite; } And, finally, Yogi himself gently rotates as he flies on his magical broom: @keyframes yogiWobble { 0% { transform: rotatetranslateYscale; } 30% { transform: rotatetranslateY; } 100% { transform: rotatetranslateYscale; } } #yogi { animation: yogiWobble 3.5s cubic-bezierinfinite alternate; } All these subtle movements bring Yogi to life. By developing structured SVGs, I can create animations that feel full of character without writing a single line of JavaScript. Try this yourself: See the Pen Bewitched Bear CSS/SVG animationby Andy Clarke. Conclusion Whether you’re recreating a classic title card or animating icons for an interface, the principles are the same: Start clean, Optimise early, and Structure everything with animation in mind. SVGs offer incredible creative freedom, but only if kept lean and manageable. When you plan your process like a production cell — layer by layer, element by element — you’ll spend less time untangling code and more time bringing your work to life. #smashing #animations #part #optimising #svgs
    SMASHINGMAGAZINE.COM
    Smashing Animations Part 4: Optimising SVGs
    SVG animations take me back to the Hanna-Barbera cartoons I watched as a kid. Shows like Wacky Races, The Perils of Penelope Pitstop, and, of course, Yogi Bear. They inspired me to lovingly recreate some classic Toon Titles using CSS, SVG, and SMIL animations. But getting animations to load quickly and work smoothly needs more than nostalgia. It takes clean design, lean code, and a process that makes complex SVGs easier to animate. Here’s how I do it. Start Clean And Design With Optimisation In Mind Keeping things simple is key to making SVGs that are optimised and ready to animate. Tools like Adobe Illustrator convert bitmap images to vectors, but the output often contains too many extraneous groups, layers, and masks. Instead, I start cleaning in Sketch, work from a reference image, and use the Pen tool to create paths. Tip: Affinity Designer (UK) and Sketch (Netherlands) are alternatives to Adobe Illustrator and Figma. Both are independent and based in Europe. Sketch has been my default design app since Adobe killed Fireworks. Beginning With Outlines For these Toon Titles illustrations, I first use the Pen tool to draw black outlines with as few anchor points as possible. The more points a shape has, the bigger a file becomes, so simplifying paths and reducing the number of points makes an SVG much smaller, often with no discernible visual difference. Bearing in mind that parts of this Yogi illustration will ultimately be animated, I keep outlines for this Bewitched Bear’s body, head, collar, and tie separate so that I can move them independently. The head might nod, the tie could flap, and, like in those classic cartoons, Yogi’s collar will hide the joins between them. Drawing Simple Background Shapes With the outlines in place, I use the Pen tool again to draw new shapes, which fill the areas with colour. These colours sit behind the outlines, so they don’t need to match them exactly. The fewer anchor points, the smaller the file size. Sadly, neither Affinity Designer nor Sketch has tools that can simplify paths, but if you have it, using Adobe Illustrator can shave a few extra kilobytes off these background shapes. Optimising The Code It’s not just metadata that makes SVG bulkier. The way you export from your design app also affects file size. Exporting just those simple background shapes from Adobe Illustrator includes unnecessary groups, masks, and bloated path data by default. Sketch’s code is barely any better, and there’s plenty of room for improvement, even in its SVGO Compressor code. I rely on Jake Archibald’s SVGOMG, which uses SVGO v3 and consistently delivers the best optimised SVGs. Layering SVG Elements My process for preparing SVGs for animation goes well beyond drawing vectors and optimising paths — it also includes how I structure the code itself. When every visual element is crammed into a single SVG file, even optimised code can be a nightmare to navigate. Locating a specific path or group often feels like searching for a needle in a haystack. That’s why I develop my SVGs in layers, exporting and optimising one set of elements at a time — always in the order they’ll appear in the final file. This lets me build the master SVG gradually by pasting it in each cleaned-up section. For example, I start with backgrounds like this gradient and title graphic. Instead of facing a wall of SVG code, I can now easily identify the background gradient’s path and its associated linearGradient, and see the group containing the title graphic. 
I take this opportunity to add a comment to the code, which will make editing and adding animations to it easier in the future: <svg ...> <defs> <!-- ... --> </defs> <path fill="url(#grad)" d="…"/> <!-- TITLE GRAPHIC --> <g> <path … /> <!-- ... --> </g> </svg> Next, I add the blurred trail from Yogi’s airborne broom. This includes defining a Gaussian Blur filter and placing its path between the background and title layers: <svg ...> <defs> <linearGradient id="grad" …>…</linearGradient> <filter id="trail" …>…</filter> </defs> <!-- GRADIENT --> <!-- TRAIL --> <path filter="url(#trail)" …/> <!-- TITLE GRAPHIC --> </svg> Then come the magical stars, added in the same sequential fashion: <svg ...> <!-- GRADIENT --> <!-- TRAIL --> <!-- STARS --> <!-- TITLE GRAPHIC --> </svg> To keep everything organised and animation-ready, I create an empty group that will hold all the parts of Yogi: <g id="yogi">...</g> Then I build Yogi from the ground up — starting with background props, like his broom: <g id="broom">...</g> Followed by grouped elements for his body, head, collar, and tie: <g id="yogi"> <g id="broom">…</g> <g id="body">…</g> <g id="head">…</g> <g id="collar">…</g> <g id="tie">…</g> </g> Since I export each layer from the same-sized artboard, I don’t need to worry about alignment or positioning issues later on — they’ll all slot into place automatically. I keep my code clean, readable, and ordered logically by layering elements this way. It also makes animating smoother, as each component is easier to identify. Reusing Elements With <use> When duplicate shapes get reused repeatedly, SVG files can get bulky fast. My recreation of the “Bewitched Bear” title card contains 80 stars in three sizes. Combining all those shapes into one optimised path would bring the file size down to 3KB. But I want to animate individual stars, which would almost double that to 5KB: <g id="stars"> <path class="star-small" fill="#eae3da" d="..."/> <path class="star-medium" fill="#eae3da" d="..."/> <path class="star-large" fill="#eae3da" d="..."/> <!-- ... --> </g> Moving the stars’ fill attribute values to their parent group reduces the overall weight a little: <g id="stars" fill="#eae3da"> <path class="star-small" d="…"/> <path class="star-medium" d="…"/> <path class="star-large" d="…"/> <!-- ... --> </g> But a more efficient and manageable option is to define each star size as a reusable template: <defs> <path id="star-large" fill="#eae3da" fill-rule="evenodd" d="…"/> <path id="star-medium" fill="#eae3da" fill-rule="evenodd" d="…"/> <path id="star-small" fill="#eae3da" fill-rule="evenodd" d="…"/> </defs> With this setup, changing a star’s design only means updating its template once, and every instance updates automatically. Then, I reference each one using <use> and position them with x and y attributes: <g id="stars"> <!-- Large stars --> <use href="#star-large" x="1575" y="495"/> <!-- ... --> <!-- Medium stars --> <use href="#star-medium" x="1453" y="696"/> <!-- ... --> <!-- Small stars --> <use href="#star-small" x="1287" y="741"/> <!-- ... --> </g> This approach makes the SVG easier to manage, lighter to load, and faster to iterate on, especially when working with dozens of repeating elements. Best of all, it keeps the markup clean without compromising on flexibility or performance. Adding Animations The stars trailing behind Yogi’s stolen broom bring so much personality to the animation. 
I wanted them to sparkle in a seemingly random pattern against the dark blue background, so I started by defining a keyframe animation that cycles through different opacity levels: @keyframes sparkle { 0%, 100% { opacity: .1; } 50% { opacity: 1; } } Next, I applied this looping animation to every use element inside my stars group: #stars use { animation: sparkle 10s ease-in-out infinite; } The secret to creating a convincing twinkle lies in variation. I staggered animation delays and durations across the stars using nth-child selectors, starting with the quickest and most frequent sparkle effects: /* Fast, frequent */ #stars use:nth-child(n + 1):nth-child(-n + 10) { animation-delay: .1s; animation-duration: 2s; } From there, I layered in additional timings to mix things up. Some stars sparkle slowly and dramatically, others more randomly, with a variety of rhythms and pauses: /* Medium */ #stars use:nth-child(n + 11):nth-child(-n + 20) { ... } /* Slow, dramatic */ #stars use:nth-child(n + 21):nth-child(-n + 30) { ... } /* Random */ #stars use:nth-child(3n + 2) { ... } /* Alternating */ #stars use:nth-child(4n + 1) { ... } /* Scattered */ #stars use:nth-child(n + 31) { ... } By thoughtfully structuring the SVG and reusing elements, I can build complex-looking animations without bloated code, making even a simple effect like changing opacity sparkle. Then, for added realism, I make Yogi’s head wobble: @keyframes headWobble { 0% { transform: rotate(-0.8deg) translateY(-0.5px); } 100% { transform: rotate(0.9deg) translateY(0.3px); } } #head { animation: headWobble 0.8s cubic-bezier(0.5, 0.15, 0.5, 0.85) infinite alternate; } His tie waves: @keyframes tieWave { 0%, 100% { transform: rotateZ(-4deg) rotateY(15deg) scaleX(0.96); } 33% { transform: rotateZ(5deg) rotateY(-10deg) scaleX(1.05); } 66% { transform: rotateZ(-2deg) rotateY(5deg) scaleX(0.98); } } #tie { transform-style: preserve-3d; animation: tieWave 10s cubic-bezier(0.68, -0.55, 0.27, 1.55) infinite; } His broom swings: @keyframes broomSwing { 0%, 20% { transform: rotate(-5deg); } 30% { transform: rotate(-4deg); } 50%, 70% { transform: rotate(5deg); } 80% { transform: rotate(4deg); } 100% { transform: rotate(-5deg); } } #broom { animation: broomSwing 4s cubic-bezier(0.5, 0.05, 0.5, 0.95) infinite; } And, finally, Yogi himself gently rotates as he flies on his magical broom: @keyframes yogiWobble { 0% { transform: rotate(-2.8deg) translateY(-0.8px) scale(0.998); } 30% { transform: rotate(1.5deg) translateY(0.3px); } 100% { transform: rotate(3.2deg) translateY(1.2px) scale(1.002); } } #yogi { animation: yogiWobble 3.5s cubic-bezier(.37, .14, .3, .86) infinite alternate; } All these subtle movements bring Yogi to life. By developing structured SVGs, I can create animations that feel full of character without writing a single line of JavaScript. Try this yourself: See the Pen Bewitched Bear CSS/SVG animation [forked] by Andy Clarke. Conclusion Whether you’re recreating a classic title card or animating icons for an interface, the principles are the same: Start clean, Optimise early, and Structure everything with animation in mind. SVGs offer incredible creative freedom, but only if kept lean and manageable. When you plan your process like a production cell — layer by layer, element by element — you’ll spend less time untangling code and more time bringing your work to life.
  • Design to Code with the Figma MCP Server

    Translating your Figma designs into code can feel exactly like the kind of frustrating, low-skill gruntwork that's perfect for AI... except that most of us have also watched AI butcher hopeful screenshots into unresponsive spaghetti. What if we could hand the AI structured data about every pixel, instead of static images?

    This is how Figma Model Context Protocol (MCP) servers work. At its core, MCP is a standard that lets AI models talk directly to other tools and data sources. In our case, MCP means AI can tap into Figma's API, moving beyond screenshot guesswork to generations backed by the semantic details of your design.

    Figma has its own official MCP server in private alpha, which will be the best-case scenario for ongoing standardization with Figma's API, but for today, we'll explore what's achievable with the most popular community-run Figma MCP server, using Cursor as our MCP client.

    The anatomy of a design handoff, and why Figma MCP is a step forward

    It's helpful to know first what problem we're trying to solve with Figma MCP. In case you haven't had the distinct pleasure of experiencing a typical design handoff to engineering, let me take you on a brief tour:

    1. Someone in your org, usually with a lot of opinions, decides on a new feature, component, or page that needs to be added to the code.
    2. Your design team creates a mockup. It is beautiful and full of potential. If you're really lucky, it's even practical to implement in code. You're often not really lucky.
    3. You begin to think about how to implement the design. Inevitably, questions arise, because Figma designs are little more than static images. What happens when you hover this button? Is there an animation on scroll? Is this still legible at tablet size?
    4. There is a lot of back and forth, during which time you engineer, scrap work, engineer, scrap work, and finally arrive at a passable version, passable to you because it seems to piss everyone off equally.
    5. Now, finally, you can do the fun part: finesse. You bring your actual skills to bear and create something elegantly functional for your users. There may be more iterations after this, but you're happy for now.

    Sound familiar? Hopefully, it goes better at your org.

    Where AI fits into the design-to-code process

    Since AI arrived on the scene, everyone's been trying to shoehorn it into everything. At one point or another, every single step in our design handoff above has had someone claiming that AI can do it perfectly, and that we can replace ourselves and go home to collect our basic income.

    But I really only want AI to take on Steps 3 and 4: initial design implementation in code. For the rest, I very much like humans in charge. This is why something like a design-to-code AI excites me. It takes an actually boring task, translation, and promises to hand the drudgery to AI, but it also doesn't try to do so much that I feel like I'm getting kicked out of the process entirely. AI scaffolds the boilerplate, and I can just edit the details.

    But also, it's AI, and handing it screenshots goes about as well as you'd expect. It's like if you've ever tried to draw a friend's face from memory: sure, you can kinda tell it's them. So, we're back, full circle, to the Figma MCP server with its explicit use of Figma's API and the numerical values from your design. Let's try it and see how much better the results may be.

    How to use the Figma MCP server

    Okay, down to business. Feel free to follow along. We're going to:

    1. Get Figma credentials and a sample design
    2. Get the MCP server running in Cursor (or your client of choice)
    3. Set up a quick target repo
    4. Walk through an example design-to-code flow

    Step 1: Get your Figma file and credentials

    If you've already got some Figma designs handy, great! It's more rewarding to see your own designs come to life. Otherwise, feel free to visit Figma's listing of open design systems and pick one like the Material 3 Design Kit. I'll be using a screen from the Material 3 Design Kit for my test.

    Note that you may have to copy/paste the design to your own file, right-click the layer, and "detach instance," so that it's no longer a component. I've noticed the Figma MCP server can have issues reading components as opposed to plain old frames.

    Next, you'll need your Personal Access Token:

    1. Head to your Figma account settings.
    2. Go to the Security tab.
    3. Generate a new token with the permissions and expiry date you prefer.

    Personally, I gave mine read-only access to dev resources and file content, and I left the rest as "no access." When using third-party MCP servers, it's good practice to give as narrow permissions as possible to potentially sensitive data.

    Step 2: Set up your MCP client (Cursor)

    Now that we've got our token, we can hop into an MCP client of your choosing. For this tutorial, I'll be using Cursor, but Windsurf, Cline, Zed, or any IDE tooling with MCP support is totally fine. (Here's a breakdown of the differences.) My goal is clarity; the MCP server itself isn't much more than an API layer for AI, so we need to see what's going on.

    In Cursor, head to Cursor Settings -> MCP -> Add new global MCP server. Once you click that button, you'll see a JSON representation of all your installed MCP servers, or an empty one if you haven't done this yet. You can add the community Figma MCP server like so:

    {
      "mcpServers": {
        "Framelink Figma MCP": {
          "command": "npx",
          "args": ["-y", "figma-developer-mcp", "--figma-api-key=YOUR_FIGMA_ACCESS_TOKEN", "--stdio"]
        }
      }
    }

    To ensure Cursor can use npx, make sure you have Node installed on your system. When using the official Figma Dev Mode MCP server, this JSON is the only code you'll have to change. Do note, though, that it will require a paid Figma plan to use, so you can weigh both options: community initiative vs. standardized support.

    Now, when you prompt Cursor in Agent mode, you'll see the AI make tool calls to the MCP server when you say things like, "Use the Figma MCP to..." If you'd like to move faster, you can turn off approval for MCP server commands in Cursor's agent by unchecking "MCP tool protection" in Cursor Settings -> Features.

    Step 3: Set up a target repo

    Next, we'll need somewhere to actually put the resulting code. When using this workflow, you're not always going to be starting from scratch; good design to code means implementing Figma designs in existing repos. For our purposes today, I'll just spin up a Next.js starter template with npx create-next-app@latest.
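    For reference, the scaffold command would look something like the one below; the project name and flags are assumptions chosen for this walkthrough, not requirements:

    npx create-next-app@latest figma-mcp-demo --typescript --tailwind --eslint --app

    Any recent Next.js starter works just as well; the point is simply to have a repo the agent can write into.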

    Step 4: Showtime

    Okay, we should be all set. Select the relevant layer(s) in Figma, copy their links, and feed them into the Cursor agent. My prompt is just:

        Can you replace my homepage with this Figma design? It should be a basic chat interface.

        Figma frame: https://www.figma.com/design/CPDcrzkVChAzQ3q1pC5mXd/Figma-MCP-vs.-Builder-Fusion?node-id=2-215&t=K6v805pKyoU4FqdA-4

        Please use the Figma MCP server.

        Thanks, and by the way, when the robot apocalypse comes, I'm one of the good ones.
    Let's see it in action. Here's an example of some of the resulting code that it made:

    {/* Message Input */}
    <div className="p-6 bg-white border-t border-[#E8DEF8]">
      <div className="flex items-center space-x-4">
        <button className="p-2 rounded-full hover:bg-[#F3EDF7]">
          <svg className="w-6 h-6 text-[#1D1B20]" fill="currentColor" viewBox="0 0 24 24">
            <path d="M12 2C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2zm5 11h-4v4h-2v-4H7v-2h4V7h2v4h4v2z"/>
          </svg>
        </button>

        <button className="p-2 rounded-full hover:bg-[#F3EDF7]">
          <svg className="w-6 h-6 text-[#1D1B20]" fill="currentColor" viewBox="0 0 24 24">
            <path d="M11.99 2C6.47 2 2 6.48 2 12s4.47 10 9.99 10C17.52 22 22 17.52 22 12S17.52 2 11.99 2zM12 20c-4.42 0-8-3.58-8-8s3.58-8 8-8 8 3.58 8 8-3.58 8-8 8zm3.5-9c.83 0 1.5-.67 1.5-1.5S16.33 8 15.5 8 14 8.67 14 9.5s.67 1.5 1.5 1.5zm-7 0c.83 0 1.5-.67 1.5-1.5S9.33 8 8.5 8 7 8.67 7 9.5 7.67 11 8.5 11zm3.5 6.5c2.33 0 4.31-1.46 5.11-3.5H6.89c.8 2.04 2.78 3.5 5.11 3.5z"/>
          </svg>
        </button>

        <div className="flex-1 relative">
          <div className="flex items-center bg-[#ECE6F0] rounded-full px-4 py-3">
            <button className="p-1 rounded-full hover:bg-[#D0BCFF] mr-3">
              <svg className="w-5 h-5 text-[#4A4459]" fill="currentColor" viewBox="0 0 24 24">
                <path d="M3 18h18v-2H3v2zm0-5h18v-2H3v2zm0-7v2h18V6H3z"/>
              </svg>
            </button>

            <input
              type="text"
              value={newMessage}
              onChange={(e) => setNewMessage(e.target.value)}
              onKeyPress={handleKeyPress}
              placeholder="Type a message..."
              className="flex-1 bg-transparent outline-none text-[#1D1B20] placeholder-[#4A4459]"
            />

            <button
              onClick={handleSendMessage}
              className="p-1 rounded-full hover:bg-[#D0BCFF] ml-3"
            >
              <svg className="w-5 h-5 text-[#4A4459]" fill="currentColor" viewBox="0 0 24 24">
                <path d="M15.5 14h-.79l-.28-.27C15.41 12.59 16 11.11 16 9.5 16 5.91 13.09 3 9.5 3S3 5.91 3 9.5 5.91 16 9.5 16c1.61 0 3.09-.59 4.23-1.57l.27.28v.79l5 4.99L20.49 19l-4.99-5zm-6 0C7.01 14 5 11.99 5 9.5S7.01 5 9.5 5 14 7.01 14 9.5 11.99 14 9.5 14z"/>
              </svg>
            </button>
          </div>
        </div>
      </div>
    </div>

    In total, the AI wrote a 278-line component that mostly works, in about two minutes. Honestly, not bad for a single shot. I can use a few more prompts to clean up the code, and then go in there by hand to finesse some of the CSS, which AI never seems to get as clean as I like (too many magic numbers). But it definitely saves me time over setting this all up by hand.
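    The excerpt above references state and a few handlers that aren't shown. I haven't reproduced the agent's actual implementation, but in a React component like this the wiring typically looks something along these lines (a sketch: the names come from the excerpt, the bodies and types are assumptions):

    import { useState, type KeyboardEvent } from "react";

    // Assumed to live inside the generated component, above the JSX.
    const [messages, setMessages] = useState<string[]>([]);
    const [newMessage, setNewMessage] = useState("");

    // Append the drafted message to the list and clear the input.
    const handleSendMessage = () => {
      const text = newMessage.trim();
      if (!text) return;
      setMessages((prev) => [...prev, text]);
      setNewMessage("");
    };

    // Let Enter double as the send button.
    const handleKeyPress = (e: KeyboardEvent<HTMLInputElement>) => {
      if (e.key === "Enter") handleSendMessage();
    };

    None of this is where the MCP server earns its keep; the value is in the markup and styling it pulled from the Figma data.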
    How to get better results from Figma MCP

    There are a few things we can do to make the results even better:

    Within your prompt, help the AI understand the purpose of the design and how exactly it fits into your existing code.
    Use Cursor Rules or other in-code documentation to explain to the Cursor agent the style of CSS you'd like, and so on.
    Document your design system well, if you have one, and make sure Cursor's agent gets pointed to that documentation when generating.
    Don't overwhelm the agent. Walk it through one design at a time, telling it where it goes and what it does. The process isn't fully automatic yet.

    Basically, it all boils down to more context, given granularly. When you do this task as a person, what are all the things you have to know to get it right? Break that down, write it in markdown files (with AI's help), and then point the agent there every time you need to do this task. Some markdown files you might attach in all design generations are:

    A design system component list
    A CSS style guide
    A framework (e.g., React) style guide
    Test suite rules
    Explicit instructions to iterate on failed lints, TypeScript checks, and tests

    Individual prompts could then just include what the new component should do and how it fits in the app; a minimal sketch of one such context file follows below.
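    Purely as an illustration, a design-system context file of that sort might read like this; the file name and every rule in it are invented for the example rather than taken from the article:

    <!-- docs/design-context.md (hypothetical) -->
    # Design notes for the agent

    ## Components
    - Prefer existing components in src/components/ui over new one-off markup.
    - Chat messages render through the shared MessageBubble component.

    ## Styling
    - Tailwind utility classes only; no inline style attributes.
    - Pull colours from the theme tokens in tailwind.config.ts instead of hard-coded hex values.

    ## Checks
    - Run the lint and type-check scripts and iterate until both pass.

    The specifics matter far less than the fact that the agent is pointed at them on every run.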
    Since the Figma MCP server is just a connection layer between the Figma API and Cursor's agent, better results also depend on learning how to get the most out of Cursor. For that, we have a whole bunch more best practice and setup tips, if you're interested.

    More than anything, don't expect perfect results. Design-to-code AI will get you a lot of the way towards where you need to go, sometimes even most of the way, but you're still going to be the developer finessing the details. The goal is just to save a little time. You're not trying to replace yourself.

    Current limitations of Figma MCP

    Personally, I like this Figma MCP workflow. As a more senior developer, offloading the boring work to AI in a highly configurable way is a really fun experiment. But there are still a lot of limitations:

    MCP is a dev-only playground. Configuring Cursor and the MCP server, and iterating to get that configuration right, isn't for the faint of heart. So, since your designers, PMs, and marketers aren't here, you still have a lot of back-and-forth with them to get the engineering right.
    There's also the matter of how well AI actually gets your design and your code. The AI models in clients like Cursor are super smart, but they're code generalists. They haven't been schooled specifically in turning Figma layouts into perfect code, which can lead to some... creative... interpretations. Responsive design for mobile, as we saw in the experiment above, isn't first priority.
    It's not a deterministic process. Even if AI has perfect access to Figma data, it can still go off the rails. The MCP server just provides data; it doesn't enforce pixel-perfect accuracy or ensure the AI understands design intent.
    Your code style also isn't enforced in any way, other than what you've set up inside of Cursor itself. Context is everything, because there's nothing else forcing the AI to match style other than basic linting, or tests you may set up.

    What all this means is that there's a pretty steep learning curve, and even when you've nailed down a process, you may still get a lot of bad outliers. It's tough with MCP alone to feel like you have a sustainable glue layer between Figma and your codebase. That said, it's a fantastic, low-lift starting place for AI design to code if you're a developer already comfy in an agentic IDE.

    Builder's approach to design to code

    So, what if you're not a developer, or you're looking for a more predictable, sustainable workflow? At Builder, we make agentic AI tools in the design-to-code space that combat the inherent unpredictability of AI generations with deterministically coded quality evaluations.

    Figma to code is a solved problem for us already. Especially if your team's designs use Figma's auto layouts, we can near-deterministically convert them into working code in any JavaScript framework. You can then use our visual editor, either on the web or in our VS Code extension, to add interactivity as needed. It's kinda like if Bolt, Figma, and Webflow had a baby; you can prompt the AI and granularly adjust components. Vibe code DOOM or just fix your padding. Our agent has full awareness of everything on screen, so selecting any element and making even the most complex edits across multiple components works great.

    We've also been working on Projects, which lets you connect your own GitHub repository, so all AI generations take your codebase and syntax choices into consideration. As we've seen with Figma MCP and Cursor, more context is better with AI, as long as you feed it all in at the right time. Projects syncs your design system across Figma and code, and you can turn any change into a PR (with minimal diffs) for you and your team to review.

    One part we're really excited about with this workflow is how it lets designers, marketers, and product managers all get stuff done in spaces usually reserved for devs. As we've been dogfooding internally, we've seen boards of Jira papercut tickets just kinda... vanish. Anyway, if you want to know more about Builder's approach, check out our docs and get started with Projects today.

    So, is the Figma MCP worth your time?

    Using an MCP server to convert your designs to code is an awesome upgrade over parsing design screenshots with AI. Its data-rich approach gets you much farther along, much faster than developer effort alone. And with Figma's official Dev Mode MCP server launching out of private alpha soon, there's no better time to get used to the workflow and to test out its strengths and weaknesses. Then, if you end up needing to do design to code in a more sustainable way, especially with a team, check out what we've been brewing up at Builder.

    Happy design engineering!
  • Fenix Art Museum / MAD Architects

    © Iwan Baan. Museum, Refurbishment • Rotterdam, The Netherlands

    Architects: MAD Architects
    Area: 8000 m²
    Year: 2025
    Manufacturers: Goppion
    Text description provided by the architects. Fenix is a major new museum that explores migration through the lens of art, opening on a landmark site in Rotterdam's City Harbor, developed by internationally acclaimed architects MAD. With a rapidly expanding collection of historic and contemporary objects, Fenix tells the story of migration through a series of encounters with art, architecture, photography, food, and history. Located in what was once part of the world's largest transshipment warehouse, on a peninsula in Rotterdam's historic port district, Fenix overlooks the docks where millions of migrant journeys began and ended. The monumental 16,000 square meter warehouse has been transformed to become Fenix by MAD Architects with restoration consultation by Bureau Polderman. This is MAD Architects' first commission for a public cultural building in Europe, as well as the first museum to be built by a Chinese firm in Europe. The project was initiated by the Droom en Daad Foundation, founded in 2016. The Foundation is helping redefine Rotterdam for the 21st century - developing new kinds of arts and culture institutions and fostering new creative talent that reflects the city's diversity, its spirit, and its history.

    Restoration of the 172-meter-long façade of the former shipping and storage warehouse began in 2018, led by Bureau Polderman, and took a year and a half to complete. Some architectural details date back to 1923 when the warehouse opened, while others were part of the 1948-1950 reconstruction plan. In the past 60 years, many additions were made and the building's function changed many times. The façade lacked uniformity. Fronts and frames were rusty. All elements along the façade have now been restored, refurbished, or rebuilt. The characteristic windows were restored to reflect the style of 1923. The 2,200 sqm expanse of the south façade was blast-cleaned and cement stucco was reapplied. The characteristic sliding doors at street level have been restored to their original post-war state, with doors and frames repainted in their original green color. A serene rhythm of columns, windows, and fronts has emerged that emphasizes the horizontal quality of the building.

    A defining new feature of the building is the Tornado - a double-helix staircase evocative of rising air that climbs from the ground floor and flows up and out of the rooftop onto an outdoor platform offering spectacular panoramic views across Rotterdam and the Maas River, 24 meters above ground level. The dynamic structure is clad in 297 polished stainless-steel panels, made in Groningen, Netherlands. The canopy that sits at the top of the structure is 17m in length and was transported by boat from Groningen to Rotterdam in pieces before being assembled and lifted into place. Inside the Tornado is a 550m-long double-helix wooden staircase which emerges onto the platform, which can also be accessed via a central shaft.

    Inside the building are a series of vast gallery spaces spread over two floors, housing Fenix's growing art and historical collection, as well as a series of commissions by emerging artists from across the world. The ground floor contains exhibition and programming spaces, while the upstairs galleries are dedicated to the Fenix Collection. The museum is accessed via entrances in the centre of the north façade on the riverfront and the south façade. On arrival, visitors are immediately drawn to the base of the Tornado, whose dynamic, twisting form is lit by the glass roof above the central atrium that allows natural light to filter into the lobby. The entrance atrium features a welcome desk, museum shop, and café. At 2,275 sqm, Plein is a vast, flexible space for events and performances and will host a constantly changing programme of activity curated for and with Rotterdam's communities. Located on the ground floor on the eastern side of the building, it features doors on three sides which can be opened out to create a welcoming covered public space. Fenix offers a number of dining options located throughout the building where visitors can encounter food cultures that have travelled the world.

    The top of the warehouse features a 6,750 sqm 'green roof', featuring sedum plants arranged in a concentric pattern, in line with the shape of the Tornado. As well as supporting biodiversity, green roofs provide insulation and store rainwater in the plants and substrate, releasing it back into the atmosphere through evaporation. This significantly reduces the burden on the sewerage system, reducing the risk of flooding and the burden on water treatment. The building uses a Thermal Energy Storage (TES) system, which stores excess heat from the building in the soil. A heat pump is connected to the TES to produce the correct temperature for the building. The aquifer serves as the source for the heat pump. By using the heat pump and passive cooling, it is possible to save up to 60 percent in heating energy and 80 percent in cooling energy. The staircase of the Tornado is made from sustainable Norwegian wood called Kebony, a leading modified wood brand established in Oslo, Norway, that uses a proven, innovative, patented technology to enhance traditional timber. Biobased modified wood is a sustainable building material with a significantly lower environmental impact than other building materials. Fenix repurposes a 100-year-old warehouse, restored as much as possible to its original state in the 1950s, with interventions in line with the original architecture from 1923.

    The building has been designed in consultation with VGR, an association specializing in making buildings as accessible and welcoming as possible. Plein and the Atrium will be publicly accessible spaces that are free to enter.

    Materials: Steel, Concrete. Published on May 21, 2025. Cite: "Fenix Art Museum / MAD Architects" 21 May 2025. ArchDaily. ISSN 0719-8884.
    You've started following your first account!Did you know?You'll now receive updates based on what you follow! Personalize your stream and start following your favorite authors, offices and users.Go to my stream
    #fenix #art #museum #mad #architects
    Fenix Art Museum / MAD Architects
    Fenix Art Museum / MAD Architects
    Museum, Refurbishment • Rotterdam, The Netherlands
    Architects: MAD Architects
    Area: 8000 m²
    Year: 2025
    Photographs: © Iwan Baan
    Manufacturers: Goppion
    Materials: Steel, Concrete
    Text description provided by the architects. Fenix is a major new museum that explores migration through the lens of art, opening on a landmark site in Rotterdam's City Harbor and developed by the internationally acclaimed architects MAD. With a rapidly expanding collection of historic and contemporary objects, Fenix tells the story of migration through a series of encounters with art, architecture, photography, food, and history. Located in what was once part of the world's largest transshipment warehouse, on a peninsula in Rotterdam's historic port district, Fenix overlooks the docks where millions of migrant journeys began and ended. The monumental 16,000 square meter warehouse has been transformed into Fenix by MAD Architects, with restoration consultation by Bureau Polderman. This is MAD Architects' first commission for a public cultural building in Europe, as well as the first museum in Europe to be built by a Chinese firm. The project was initiated by the Droom en Daad Foundation, founded in 2016. The Foundation is helping redefine Rotterdam for the 21st century, developing new kinds of arts and culture institutions and fostering new creative talent that reflects the city's diversity, its spirit, and its history.
    Restoration of the 172-meter-long façade of the former shipping and storage warehouse began in 2018, led by Bureau Polderman, and took a year and a half to complete. Some architectural details date back to 1923, when the warehouse opened, while others were part of the 1948-1950 reconstruction plan. In the past 60 years, many additions were made and the building's function changed many times. The façade lacked uniformity, and fronts and frames were rusty. All elements along the façade have now been restored, refurbished, or rebuilt. The characteristic windows were restored to reflect the style of 1923. The 2,200 sqm expanse of the south façade was blast-cleaned and cement stucco was reapplied. The characteristic sliding doors at street level have been restored to their original post-war state, with doors and frames repainted in their original green color. A serene rhythm of columns, windows, and fronts has emerged that emphasizes the horizontal quality of the building.
    A defining new feature of the building is the Tornado, a double-helix staircase evocative of rising air that climbs from the ground floor and flows up and out of the rooftop onto an outdoor platform 24 meters above ground level, offering spectacular panoramic views across Rotterdam and the Maas River. The dynamic structure is clad in 297 polished stainless-steel panels made in Groningen, Netherlands. The canopy that sits at the top of the structure is 17 m in length and was transported by boat from Groningen to Rotterdam in pieces before being assembled and lifted into place. Inside the Tornado is a 550 m long double-helix wooden staircase that emerges onto the platform, which can also be accessed via a central shaft.
    Inside the building are a series of vast gallery spaces spread over two floors, housing Fenix's growing art and historical collection, as well as a series of commissions by emerging artists from across the world. The ground floor contains exhibition and programming spaces, while the upstairs galleries are dedicated to the Fenix Collection. The museum is accessed via entrances in the centre of the north façade on the riverfront and the south façade. On arrival, visitors are immediately drawn to the base of the Tornado, whose dynamic, twisting form is lit by the glass roof above the central atrium, which allows natural light to filter into the lobby. The entrance atrium features a welcome desk, museum shop, and café. At 2,275 sqm, Plein is a vast, flexible space for events and performances that will host a constantly changing programme of activity curated for and with Rotterdam's communities. Located on the ground floor on the eastern side of the building, it features doors on three sides that can be opened out to create a welcoming covered public space. Fenix offers a number of dining options located throughout the building where visitors can encounter food cultures that have travelled the world.
    The top of the warehouse features a 6,750 sqm green roof of sedum plants arranged in a concentric pattern that follows the shape of the Tornado. As well as supporting biodiversity, green roofs provide insulation and store rainwater in the plants and substrate, releasing it back into the atmosphere through evaporation. This significantly reduces the burden on the sewerage system, lowering both the risk of flooding and the load on water treatment. The building uses a Thermal Energy System (TES), which stores excess heat from the building in the soil. A heat pump connected to the TES produces the correct temperature for the building, with the aquifer serving as the heat pump's source. By using the heat pump and passive cooling, it is possible to save up to 60 percent in heating energy and 80 percent in cooling energy. The staircase of the Tornado is made from Kebony, a sustainable modified Norwegian wood from a leading brand established in Oslo that uses a proven, patented technology to enhance traditional timber. Biobased modified wood is a sustainable building material with a significantly lower environmental impact than other building materials. Fenix repurposes a 100-year-old warehouse, restored as much as possible to its original 1950s state, with interventions in line with the original architecture of 1923.
    The building has been designed in consultation with VGR, an association specializing in making buildings as accessible and welcoming as possible. Plein and the Atrium will be publicly accessible spaces that are free to enter.
    Cite: "Fenix Art Museum / MAD Architects" 21 May 2025. ArchDaily. <https://www.archdaily.com/1030328/fenix-art-museum-mad-architects> ISSN 0719-8884