• HOW DISGUISE BUILT OUT THE VIRTUAL ENVIRONMENTS FOR A MINECRAFT MOVIE

    By TREVOR HOGG

    Images courtesy of Warner Bros. Pictures.

    Rather than a world constructed around photorealistic pixels, a video game created by Markus Persson has taken the boxier 3D voxel route, which has become its signature aesthetic, and sparked an international phenomenon that finally gets adapted into a feature with the release of A Minecraft Movie. Brought onboard to help filmmaker Jared Hess in creating the environments that the cast of Jason Momoa, Jack Black, Sebastian Hansen, Emma Myers and Danielle Brooks find themselves inhabiting was Disguise under the direction of Production VFX Supervisor Dan Lemmon.

    “s the Senior Unreal Artist within the Virtual Art Departmenton Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.”
    —Talia Finlayson, Creative Technologist, Disguise

    Interior and exterior environments had to be created, such as the shop owned by Steve.

    “Prior to working on A Minecraft Movie, I held more technical roles, like serving as the Virtual Production LED Volume Operator on a project for Apple TV+ and Paramount Pictures,” notes Talia Finlayson, Creative Technologist for Disguise. “But as the Senior Unreal Artist within the Virtual Art Departmenton Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” The project provided new opportunities. “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance,” notes Laura Bell, Creative Technologist for Disguise. “But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”

    Set designs originally created by the art department in Rhinoceros 3D were transformed into fully navigable 3D environments within Unreal Engine. “These scenes were far more than visualizations,” Finlayson remarks. “They were interactive tools used throughout the production pipeline. We would ingest 3D models and concept art, clean and optimize geometry using tools like Blender, Cinema 4D or Maya, then build out the world in Unreal Engine. This included applying materials, lighting and extending environments. These Unreal scenes we created were vital tools across the production and were used for a variety of purposes such as enabling the director to explore shot compositions, block scenes and experiment with camera movement in a virtual space, as well as passing along Unreal Engine scenes to the visual effects vendors so they could align their digital environments and set extensions with the approved production layouts.”

    A virtual exploration of Steve’s shop in Midport Village.

    Certain elements have to be kept in mind when constructing virtual environments. “When building virtual environments, you need to consider what can actually be built, how actors and cameras will move through the space, and what’s safe and practical on set,” Bell observes. “Outside the areas where strict accuracy is required, you want the environments to blend naturally with the original designs from the art department and support the story, creating a space that feels right for the scene, guides the audience’s eye and sets the right tone. Things like composition, lighting and small environmental details can be really fun to work on, but also serve as beautiful additions to help enrich a story.”

    “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance. But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”
    —Laura Bell, Creative Technologist, Disguise

    Among the buildings that had to be created for Midport Village was Steve’sLava Chicken Shack.

    Concept art was provided that served as visual touchstones. “We received concept art provided by the amazing team of concept artists,” Finlayson states. “Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging. Sometimes we would also help the storyboard artists by sending through images of the Unreal Engine worlds to help them geographically position themselves in the worlds and aid in their storyboarding.” At times, the video game assets came in handy. “Exteriors often involved large-scale landscapes and stylized architectural elements, which had to feel true to the Minecraft world,” Finlayson explains. “In some cases, we brought in geometry from the game itself to help quickly block out areas. For example, we did this for the Elytra Flight Chase sequence, which takes place through a large canyon.”

    Flexibility was critical. “A key technical challenge we faced was ensuring that the Unreal levels were built in a way that allowed for fast and flexible iteration,” Finlayson remarks. “Since our environments were constantly being reviewed by the director, production designer, DP and VFX supervisor, we needed to be able to respond quickly to feedback, sometimes live during a review session. To support this, we had to keep our scenes modular and well-organized; that meant breaking environments down into manageable components and maintaining clean naming conventions. By setting up the levels this way, we could make layout changes, swap assets or adjust lighting on the fly without breaking the scene or slowing down the process.” Production schedules influence the workflows, pipelines and techniques. “No two projects will ever feel exactly the same,” Bell notes. “For example, Pat Younisadapted his typical VR setup to allow scene reviews using a PS5 controller, which made it much more comfortable and accessible for the director. On a more technical side, because everything was cubes and voxels, my Blender workflow ended up being way heavier on the re-mesh modifier than usual, definitely not something I’ll run into again anytime soon!”

    A virtual study and final still of the cast members standing outside of the Lava Chicken Shack.

    “We received concept art provided by the amazing team of concept artists. Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.”
    —Talia Finlayson, Creative Technologist, Disguise

    The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks. “I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.”

    Virtually conceptualizing the layout of Midport Village.

    Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.” Bell is in agreement with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay Georgeand I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.”

    An example of the virtual and final version of the Woodland Mansion.

    “Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.”
    —Laura Bell, Creative Technologist, Disguise

    Extensive detail was given to the center of the sets where the main action unfolds. “For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.”

    Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment.

    Doing a virtual scale study of the Mountainside.

    Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.”

    Piglots cause mayhem during the Wingsuit Chase.

    Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods.

    “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols, Pat Younis, Jake Tuckand Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.” There was another challenge that is more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.”
    #how #disguise #built #out #virtual
    HOW DISGUISE BUILT OUT THE VIRTUAL ENVIRONMENTS FOR A MINECRAFT MOVIE
    By TREVOR HOGG Images courtesy of Warner Bros. Pictures. Rather than a world constructed around photorealistic pixels, a video game created by Markus Persson has taken the boxier 3D voxel route, which has become its signature aesthetic, and sparked an international phenomenon that finally gets adapted into a feature with the release of A Minecraft Movie. Brought onboard to help filmmaker Jared Hess in creating the environments that the cast of Jason Momoa, Jack Black, Sebastian Hansen, Emma Myers and Danielle Brooks find themselves inhabiting was Disguise under the direction of Production VFX Supervisor Dan Lemmon. “s the Senior Unreal Artist within the Virtual Art Departmenton Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” —Talia Finlayson, Creative Technologist, Disguise Interior and exterior environments had to be created, such as the shop owned by Steve. “Prior to working on A Minecraft Movie, I held more technical roles, like serving as the Virtual Production LED Volume Operator on a project for Apple TV+ and Paramount Pictures,” notes Talia Finlayson, Creative Technologist for Disguise. “But as the Senior Unreal Artist within the Virtual Art Departmenton Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” The project provided new opportunities. “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance,” notes Laura Bell, Creative Technologist for Disguise. “But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.” Set designs originally created by the art department in Rhinoceros 3D were transformed into fully navigable 3D environments within Unreal Engine. “These scenes were far more than visualizations,” Finlayson remarks. “They were interactive tools used throughout the production pipeline. We would ingest 3D models and concept art, clean and optimize geometry using tools like Blender, Cinema 4D or Maya, then build out the world in Unreal Engine. This included applying materials, lighting and extending environments. These Unreal scenes we created were vital tools across the production and were used for a variety of purposes such as enabling the director to explore shot compositions, block scenes and experiment with camera movement in a virtual space, as well as passing along Unreal Engine scenes to the visual effects vendors so they could align their digital environments and set extensions with the approved production layouts.” A virtual exploration of Steve’s shop in Midport Village. Certain elements have to be kept in mind when constructing virtual environments. “When building virtual environments, you need to consider what can actually be built, how actors and cameras will move through the space, and what’s safe and practical on set,” Bell observes. “Outside the areas where strict accuracy is required, you want the environments to blend naturally with the original designs from the art department and support the story, creating a space that feels right for the scene, guides the audience’s eye and sets the right tone. Things like composition, lighting and small environmental details can be really fun to work on, but also serve as beautiful additions to help enrich a story.” “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance. But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.” —Laura Bell, Creative Technologist, Disguise Among the buildings that had to be created for Midport Village was Steve’sLava Chicken Shack. Concept art was provided that served as visual touchstones. “We received concept art provided by the amazing team of concept artists,” Finlayson states. “Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging. Sometimes we would also help the storyboard artists by sending through images of the Unreal Engine worlds to help them geographically position themselves in the worlds and aid in their storyboarding.” At times, the video game assets came in handy. “Exteriors often involved large-scale landscapes and stylized architectural elements, which had to feel true to the Minecraft world,” Finlayson explains. “In some cases, we brought in geometry from the game itself to help quickly block out areas. For example, we did this for the Elytra Flight Chase sequence, which takes place through a large canyon.” Flexibility was critical. “A key technical challenge we faced was ensuring that the Unreal levels were built in a way that allowed for fast and flexible iteration,” Finlayson remarks. “Since our environments were constantly being reviewed by the director, production designer, DP and VFX supervisor, we needed to be able to respond quickly to feedback, sometimes live during a review session. To support this, we had to keep our scenes modular and well-organized; that meant breaking environments down into manageable components and maintaining clean naming conventions. By setting up the levels this way, we could make layout changes, swap assets or adjust lighting on the fly without breaking the scene or slowing down the process.” Production schedules influence the workflows, pipelines and techniques. “No two projects will ever feel exactly the same,” Bell notes. “For example, Pat Younisadapted his typical VR setup to allow scene reviews using a PS5 controller, which made it much more comfortable and accessible for the director. On a more technical side, because everything was cubes and voxels, my Blender workflow ended up being way heavier on the re-mesh modifier than usual, definitely not something I’ll run into again anytime soon!” A virtual study and final still of the cast members standing outside of the Lava Chicken Shack. “We received concept art provided by the amazing team of concept artists. Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.” —Talia Finlayson, Creative Technologist, Disguise The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks. “I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.” Virtually conceptualizing the layout of Midport Village. Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.” Bell is in agreement with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay Georgeand I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.” An example of the virtual and final version of the Woodland Mansion. “Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.” —Laura Bell, Creative Technologist, Disguise Extensive detail was given to the center of the sets where the main action unfolds. “For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.” Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment. Doing a virtual scale study of the Mountainside. Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.” Piglots cause mayhem during the Wingsuit Chase. Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods. “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols, Pat Younis, Jake Tuckand Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.” There was another challenge that is more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.” #how #disguise #built #out #virtual
    WWW.VFXVOICE.COM
    HOW DISGUISE BUILT OUT THE VIRTUAL ENVIRONMENTS FOR A MINECRAFT MOVIE
    By TREVOR HOGG Images courtesy of Warner Bros. Pictures. Rather than a world constructed around photorealistic pixels, a video game created by Markus Persson has taken the boxier 3D voxel route, which has become its signature aesthetic, and sparked an international phenomenon that finally gets adapted into a feature with the release of A Minecraft Movie. Brought onboard to help filmmaker Jared Hess in creating the environments that the cast of Jason Momoa, Jack Black, Sebastian Hansen, Emma Myers and Danielle Brooks find themselves inhabiting was Disguise under the direction of Production VFX Supervisor Dan Lemmon. “[A]s the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” —Talia Finlayson, Creative Technologist, Disguise Interior and exterior environments had to be created, such as the shop owned by Steve (Jack Black). “Prior to working on A Minecraft Movie, I held more technical roles, like serving as the Virtual Production LED Volume Operator on a project for Apple TV+ and Paramount Pictures,” notes Talia Finlayson, Creative Technologist for Disguise. “But as the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” The project provided new opportunities. “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance,” notes Laura Bell, Creative Technologist for Disguise. “But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.” Set designs originally created by the art department in Rhinoceros 3D were transformed into fully navigable 3D environments within Unreal Engine. “These scenes were far more than visualizations,” Finlayson remarks. “They were interactive tools used throughout the production pipeline. We would ingest 3D models and concept art, clean and optimize geometry using tools like Blender, Cinema 4D or Maya, then build out the world in Unreal Engine. This included applying materials, lighting and extending environments. These Unreal scenes we created were vital tools across the production and were used for a variety of purposes such as enabling the director to explore shot compositions, block scenes and experiment with camera movement in a virtual space, as well as passing along Unreal Engine scenes to the visual effects vendors so they could align their digital environments and set extensions with the approved production layouts.” A virtual exploration of Steve’s shop in Midport Village. Certain elements have to be kept in mind when constructing virtual environments. “When building virtual environments, you need to consider what can actually be built, how actors and cameras will move through the space, and what’s safe and practical on set,” Bell observes. “Outside the areas where strict accuracy is required, you want the environments to blend naturally with the original designs from the art department and support the story, creating a space that feels right for the scene, guides the audience’s eye and sets the right tone. Things like composition, lighting and small environmental details can be really fun to work on, but also serve as beautiful additions to help enrich a story.” “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance. But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.” —Laura Bell, Creative Technologist, Disguise Among the buildings that had to be created for Midport Village was Steve’s (Jack Black) Lava Chicken Shack. Concept art was provided that served as visual touchstones. “We received concept art provided by the amazing team of concept artists,” Finlayson states. “Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging. Sometimes we would also help the storyboard artists by sending through images of the Unreal Engine worlds to help them geographically position themselves in the worlds and aid in their storyboarding.” At times, the video game assets came in handy. “Exteriors often involved large-scale landscapes and stylized architectural elements, which had to feel true to the Minecraft world,” Finlayson explains. “In some cases, we brought in geometry from the game itself to help quickly block out areas. For example, we did this for the Elytra Flight Chase sequence, which takes place through a large canyon.” Flexibility was critical. “A key technical challenge we faced was ensuring that the Unreal levels were built in a way that allowed for fast and flexible iteration,” Finlayson remarks. “Since our environments were constantly being reviewed by the director, production designer, DP and VFX supervisor, we needed to be able to respond quickly to feedback, sometimes live during a review session. To support this, we had to keep our scenes modular and well-organized; that meant breaking environments down into manageable components and maintaining clean naming conventions. By setting up the levels this way, we could make layout changes, swap assets or adjust lighting on the fly without breaking the scene or slowing down the process.” Production schedules influence the workflows, pipelines and techniques. “No two projects will ever feel exactly the same,” Bell notes. “For example, Pat Younis [VAD Art Director] adapted his typical VR setup to allow scene reviews using a PS5 controller, which made it much more comfortable and accessible for the director. On a more technical side, because everything was cubes and voxels, my Blender workflow ended up being way heavier on the re-mesh modifier than usual, definitely not something I’ll run into again anytime soon!” A virtual study and final still of the cast members standing outside of the Lava Chicken Shack. “We received concept art provided by the amazing team of concept artists. Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.” —Talia Finlayson, Creative Technologist, Disguise The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks. “I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.” Virtually conceptualizing the layout of Midport Village. Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.” Bell is in agreement with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay George [VP Tech] and I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.” An example of the virtual and final version of the Woodland Mansion. “Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.” —Laura Bell, Creative Technologist, Disguise Extensive detail was given to the center of the sets where the main action unfolds. “For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.” Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment. Doing a virtual scale study of the Mountainside. Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.” Piglots cause mayhem during the Wingsuit Chase. Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods. “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols [VAD Supervisor], Pat Younis, Jake Tuck [Unreal Artist] and Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.” There was another challenge that is more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.”
    0 Reacties 0 aandelen
  • In a world where cloud computing has become the digital equivalent of air (you know, something everyone breathes in but no one really thinks about), the latest trend in datacenter technology is to send our precious data skyrocketing into the cosmos. Yes, you read that right—space-based datacenters are the new buzzword, because why let earthly problems like power outages or NIMBYism stop us from storing our data in the great beyond?

    Imagine the scene: while we sit in traffic on our way to work, feeling the weight of our earthly responsibilities, there are engineers in space suits, floating around in zero gravity, managing data storage like it’s just another day at the office. I mean, who needs a reliable power grid when you can have the cosmic energy of a thousand suns powering your Netflix binge-watching session? Talk about an upgrade!

    Of course, this leap into the stratosphere isn't without its challenges. What happens if there’s a little too much space debris? Will our precious selfies come crashing back down to Earth? Or worse, will they be lost forever among the stars? But fear not! The tech-savvy geniuses behind this initiative have assured us that they have a plan. Clearly, the best minds of our generation are focused on ensuring your TikTok videos stay safe in orbit rather than, say, solving world hunger or climate change. Priorities, am I right?

    Let’s not forget about the cost. Space travel isn’t exactly cheap. But hey, if I’m going to spend a fortune on data storage, I’d rather it be orbiting Earth than sitting in a basement somewhere in New Jersey. Because nothing says “I’m a forward-thinking tech mogul” quite like a datacenter floating serenely above the clouds, right? It’s the ultimate status symbol—better than a sports car, better than a mansion. “Look at me! My data is literally out of this world!”

    And let’s be real, the power of AI is growing faster than a toddler on a sugar rush. Our current datacenters are sweating bullets trying to keep up. So, the solution? Just toss them into orbit! Sure, it sounds like a plot from a sci-fi movie, but who needs a solid plan when you have a vision, right? The next logical step is to start launching all our problems into space. Traffic jams? Launch them! Your ex? Into orbit they go!

    So, here's to the brave souls who will be managing our digital lives from afar. May your Wi-Fi connection be strong, may your satellite dishes be well-aligned, and may your cosmic data never experience latency. Because if there’s one thing we can all agree on, it's that our data deserves a first-class ticket to space, even if it means leaving the rest of the world behind.

    #SpaceBasedDatacenters #CloudComputing #DataInOrbit #TechTrends #AIFuture
    In a world where cloud computing has become the digital equivalent of air (you know, something everyone breathes in but no one really thinks about), the latest trend in datacenter technology is to send our precious data skyrocketing into the cosmos. Yes, you read that right—space-based datacenters are the new buzzword, because why let earthly problems like power outages or NIMBYism stop us from storing our data in the great beyond? Imagine the scene: while we sit in traffic on our way to work, feeling the weight of our earthly responsibilities, there are engineers in space suits, floating around in zero gravity, managing data storage like it’s just another day at the office. I mean, who needs a reliable power grid when you can have the cosmic energy of a thousand suns powering your Netflix binge-watching session? Talk about an upgrade! Of course, this leap into the stratosphere isn't without its challenges. What happens if there’s a little too much space debris? Will our precious selfies come crashing back down to Earth? Or worse, will they be lost forever among the stars? But fear not! The tech-savvy geniuses behind this initiative have assured us that they have a plan. Clearly, the best minds of our generation are focused on ensuring your TikTok videos stay safe in orbit rather than, say, solving world hunger or climate change. Priorities, am I right? Let’s not forget about the cost. Space travel isn’t exactly cheap. But hey, if I’m going to spend a fortune on data storage, I’d rather it be orbiting Earth than sitting in a basement somewhere in New Jersey. Because nothing says “I’m a forward-thinking tech mogul” quite like a datacenter floating serenely above the clouds, right? It’s the ultimate status symbol—better than a sports car, better than a mansion. “Look at me! My data is literally out of this world!” And let’s be real, the power of AI is growing faster than a toddler on a sugar rush. Our current datacenters are sweating bullets trying to keep up. So, the solution? Just toss them into orbit! Sure, it sounds like a plot from a sci-fi movie, but who needs a solid plan when you have a vision, right? The next logical step is to start launching all our problems into space. Traffic jams? Launch them! Your ex? Into orbit they go! So, here's to the brave souls who will be managing our digital lives from afar. May your Wi-Fi connection be strong, may your satellite dishes be well-aligned, and may your cosmic data never experience latency. Because if there’s one thing we can all agree on, it's that our data deserves a first-class ticket to space, even if it means leaving the rest of the world behind. #SpaceBasedDatacenters #CloudComputing #DataInOrbit #TechTrends #AIFuture
    Space-Based Datacenters Take The Cloud into Orbit
    Where’s the best place for a datacenter? It’s an increasing problem as the AI buildup continues seemingly without pause. It’s not just a problem of NIMBYism; earthly power grids are …read more
    Like
    Love
    Wow
    Angry
    Sad
    264
    1 Reacties 0 aandelen
  • The 25 creative studios inspiring us the most in 2025

    Which creative studio do you most admire right now, and why? This is a question we asked our community via an ongoing survey. With more than 700 responses so far, these are the top winners. What's striking about this year's results is the popularity of studios that aren't just producing beautiful work but are also actively shaping discussions and tackling the big challenges facing our industry and society.
    From the vibrant energy of Brazilian culture to the thoughtful minimalism of North European aesthetics, this list reflects a global creative landscape that's more connected, more conscious, and more collaborative than ever before.
    In short, these studios aren't just following trends; they're setting them. Read on to discover the 25 studios our community is most excited about right now.
    1. Porto Rocha
    Porto Rocha is a New York-based agency that unites strategy and design to create work that evolves with the world we live in. It continues to dominate conversations in 2025, and it's easy to see why. Founders Felipe Rocha and Leo Porto have built something truly special—a studio that not only creates visually stunning work but also actively celebrates and amplifies diverse voices in design.
    For instance, their recent bold new identity for the São Paulo art museum MASP nods to Brazilian modernist design traditions while reimagining them for a contemporary audience. The rebrand draws heavily on the museum's iconic modernist architecture by Lina Bo Bardi, using a red-and-black colour palette and strong typography to reflect the building's striking visual presence.
    As we write this article, Porto Rocha just shared a new partnership with Google to reimagine the visual and verbal identity of its revolutionary Gemini AI model. We can't wait to see what they come up with!

    2. DixonBaxi
    Simon Dixon and Aporva Baxi's London powerhouse specialises in creating brand strategies and design systems for "brave businesses" that want to challenge convention, including Hulu, Audible, and the Premier League. The studio had an exceptional start to 2025 by collaborating with Roblox on a brand new design system. At the heart of this major project is the Tilt: a 15-degree shift embedded in the logo that signals momentum, creativity, and anticipation.
    They've also continued to build their reputation as design thought leaders. At the OFFF Festival 2025, for instance, Simon and Aporva delivered a masterclass on running a successful brand design agency. Their core message centred on the importance of people and designing with intention, even in the face of global challenges. They also highlighted "Super Futures," their program that encourages employees to think freely and positively about brand challenges and audience desires, aiming to reclaim creative liberation.
    And if that wasn't enough, DixonBaxi has just launched its brand new website, one that's designed to be open in nature. As Simon explains: "It's not a shop window. It's a space to share the thinking and ethos that drive us. You'll find our work, but more importantly, what shapes it. No guff. Just us."

    3. Mother
    Mother is a renowned independent creative agency founded in London and now boasts offices in New York and Los Angeles as well. They've spent 2025 continuing to push the boundaries of what advertising can achieve. And they've made an especially big splash with their latest instalment of KFC's 'Believe' campaign, featuring a surreal and humorous take on KFC's gravy. As we wrote at the time: "Its balance between theatrical grandeur and self-awareness makes the campaign uniquely engaging."
    4. Studio Dumbar/DEPT®
    Based in Rotterdam, Studio Dumbar/DEPT® is widely recognised for its influential work in visual branding and identity, often incorporating creative coding and sound, for clients such as the Dutch Railways, Instagram, and the Van Gogh Museum.
    In 2025, we've especially admired their work for the Dutch football club Feyenoord, which brings the team under a single, cohesive vision that reflects its energy and prowess. This groundbreaking rebrand, unveiled at the start of May, moves away from nostalgia, instead emphasising the club's "measured ferocity, confidence, and ambition".
    5. HONDO
    Based between Palma de Mallorca, Spain and London, HONDO specialises in branding, editorial, typography and product design. We're particular fans of their rebranding of metal furniture makers Castil, based around clean and versatile designs that highlight Castil's vibrant and customisable products.
    This new system features a bespoke monospaced typeface and logo design that evokes Castil's adaptability and the precision of its craftsmanship.

    6. Smith & Diction
    Smith & Diction is a small but mighty design and copy studio founded by Mike and Chara Smith in Philadelphia. Born from dreams, late-night chats, and plenty of mistakes, the studio has grown into a creative force known for thoughtful, boundary-pushing branding.
    Starting out with Mike designing in a tiny apartment while Chara held down a day job, the pair learned the ropes the hard way—and now they're thriving. Recent highlights include their work with Gamma, an AI platform that lets you quickly get ideas out of your head and into a presentation deck or onto a website.
    Gamma wanted their brand update to feel "VERY fun and a little bit out there" with an AI-first approach. So Smith & Diction worked hard to "put weird to the test" while still developing responsible systems for logo, type and colour. The results, as ever, were exceptional.

    7. DNCO
    DNCO is a London and New York-based creative studio specialising in place branding. They are best known for shaping identities, digital tools, and wayfinding for museums, cultural institutions, and entire neighbourhoods, with clients including the Design Museum, V&A and Transport for London.
    Recently, DNCO has been making headlines again with its ambitious brand refresh for Dumbo, a New York neighbourhood struggling with misperceptions due to mass tourism. The goal was to highlight Dumbo's unconventional spirit and demonstrate it as "a different side of New York."
    DNCO preserved the original diagonal logo and introduced a flexible "tape graphic" system, inspired by the neighbourhood's history of inventing the cardboard box, to reflect its ingenuity and reveal new perspectives. The colour palette and typography were chosen to embody Dumbo's industrial and gritty character.

    8. Hey Studio
    Founded by Verònica Fuerte in Barcelona, Spain, Hey Studio is a small, all-female design agency celebrated for its striking use of geometry, bold colour, and playful yet refined visual language. With a focus on branding, illustration, editorial design, and typography, they combine joy with craft to explore issues with heart and purpose.
    A great example of their impact is their recent branding for Rainbow Wool. This German initiative is transforming wool from gay rams into fashion products to support the LGBT community.
    As is typical for Hey Studio, the project's identity is vibrant and joyful, utilising bright, curved shapes that will put a smile on everyone's face.

    9. Koto
    Koto is a London-based global branding and digital studio known for co-creation, strategic thinking, expressive design systems, and enduring partnerships. They're well-known in the industry for bringing warmth, optimism and clarity to complex brand challenges.
    Over the past 18 months, they've undertaken a significant project to refresh Amazon's global brand identity. This extensive undertaking has involved redesigning Amazon's master brand and over 50 of its sub-brands across 15 global markets.
    Koto's approach, described as "radical coherence", aims to refine and modernize Amazon's most recognizable elements rather than drastically changing them. You can read more about the project here.

    10. Robot Food
    Robot Food is a Leeds-based, brand-first creative studio recognised for its strategic and holistic approach. They're past masters at melding creative ideas with commercial rigour across packaging, brand strategy and campaign design.
    Recent Robot Food projects have included a bold rebrand for Hip Pop, a soft drinks company specializing in kombucha and alternative sodas. Their goal was to elevate Hip Pop from an indie challenger to a mainstream category leader, moving away from typical health drink aesthetics.
    The results are visually striking, with black backgrounds prominently featured, punctuated by vibrant fruit illustrations and flavour-coded colours. about the project here.

    11. Saffron Brand Consultants
    Saffron is an independent global consultancy with offices in London, Madrid, Vienna and Istanbul. With deep expertise in naming, strategy, identity, and design systems, they work with leading public and private-sector clients to develop confident, culturally intelligent brands.
    One 2025 highlight so far has been their work for Saudi National Bankto create NEO, a groundbreaking digital lifestyle bank in Saudi Arabia.
    Saffron integrated cultural and design trends, including Saudi neo-futurism, for its sonic identity to create a product that supports both individual and community connections. The design system strikes a balance between modern Saudi aesthetics and the practical demands of a fast-paced digital product, ensuring a consistent brand reflection across all interactions.
    12. Alright Studio
    Alright Studio is a full-service strategy, creative, production and technology agency based in Brooklyn, New York. It prides itself on a "no house style" approach for clients, including A24, Meta Platforms, and Post Malone. One of the most exciting of their recent projects has been Offball, a digital-first sports news platform that aims to provide more nuanced, positive sports storytelling.
    Alright Studio designed a clean, intuitive, editorial-style platform featuring a masthead-like logotype and universal sports iconography, creating a calmer user experience aligned with OffBall's positive content.
    13. Wolff Olins
    Wolff Olins is a global brand consultancy with four main offices: London, New York, San Francisco, and Los Angeles. Known for their courageous, culturally relevant branding and forward-thinking strategy, they collaborate with large corporations and trailblazing organisations to create bold, authentic brand identities that resonate emotionally.
    A particular highlight of 2025 so far has been their collaboration with Leo Burnett to refresh Sandals Resorts' global brand with the "Made of Caribbean" campaign. This strategic move positions Sandals not merely as a luxury resort but as a cultural ambassador for the Caribbean.
    Wolff Olins developed a new visual identity called "Natural Vibrancy," integrating local influences with modern design to reflect a genuine connection to the islands' culture. This rebrand speaks to a growing traveller demand for authenticity and meaningful experiences, allowing Sandals to define itself as an extension of the Caribbean itself.

    14. COLLINS
    Founded by Brian Collins, COLLINS is an independent branding and design consultancy based in the US, celebrated for its playful visual language, expressive storytelling and culturally rich identity systems. In the last few months, we've loved the new branding they designed for Barcelona's 25th Offf Festival, which departs from its usual consistent wordmark.
    The updated identity is inspired by the festival's role within the international creative community, and is rooted in the concept of 'Centre Offf Gravity'. This concept is visually expressed through the festival's name, which appears to exert a gravitational pull on the text boxes, causing them to "stick" to it.
    Additionally, the 'f's in the wordmark are merged into a continuous line reminiscent of a magnet, with the motion graphics further emphasising the gravitational pull as the name floats and other elements follow.
    15. Studio Spass
    Studio Spass is a creative studio based in Rotterdam, the Netherlands, focused on vibrant and dynamic identity systems that reflect the diverse and multifaceted nature of cultural institutions. One of their recent landmark projects was Bigger, a large-scale typographic installation created for the Shenzhen Art Book Fair.
    Inspired by tear-off calendars and the physical act of reading, Studio Spass used 264 A4 books, with each page displaying abstract details, to create an evolving grid of colour and type. Visitors were invited to interact with the installation by flipping pages, constantly revealing new layers of design and a hidden message: "Enjoy books!"

    16. Applied Design Works
    Applied Design Works is a New York studio that specialises in reshaping businesses through branding and design. They provide expertise in design, strategy, and implementation, with a focus on building long-term, collaborative relationships with their clients.
    We were thrilled by their recent work for Grand Central Madison, where they were instrumental in ushering in a new era for the transportation hub.
    Applied Design sought to create a commuter experience that imbued the spirit of New York, showcasing its diversity of thought, voice, and scale that befits one of the greatest cities in the world and one of the greatest structures in it.

    17. The Chase
    The Chase Creative Consultants is a Manchester-based independent creative consultancy with over 35 years of experience, known for blending humour, purpose, and strong branding to rejuvenate popular consumer campaigns. "We're not designers, writers, advertisers or brand strategists," they say, "but all of these and more. An ideas-based creative studio."
    Recently, they were tasked with shaping the identity of York Central, a major urban regeneration project set to become a new city quarter for York. The Chase developed the identity based on extensive public engagement, listening to residents of all ages about their perceptions of the city and their hopes for the new area. The resulting brand identity uses linear forms that subtly reference York's famous railway hub, symbolising the long-standing connections the city has fostered.

    18. A Practice for Everyday Life
    Based in London and founded by Kirsty Carter and Emma Thomas, A Practice for Everyday Life built a reputation as a sought-after collaborator with like-minded companies, galleries, institutions and individuals. Not to mention a conceptual rigour that ensures each design is meaningful and original.
    Recently, they've been working on the visual identity for Muzej Lah, a new international museum for contemporary art in Bled, Slovenia opening in 2026. This centres around a custom typeface inspired by the slanted geometry and square detailing of its concrete roof tiles. It also draws from European modernist typography and the experimental lettering of Jože Plečnik, one of Slovenia's most influential architects.⁠

    A Practice for Everyday Life. Photo: Carol Sachs

    Alexey Brodovitch: Astonish Me publication design by A Practice for Everyday Life, 2024. Photo: Ed Park

    La Biennale di Venezia identity by A Practice for Everyday Life, 2022. Photo: Thomas Adank

    CAM – Centro de Arte Moderna Gulbenkian identity by A Practice for Everyday Life, 2024. Photo: Sanda Vučković

    19. Studio Nari
    Studio Nari is a London-based creative and branding agency partnering with clients around the world to build "brands that truly connect with people". NARI stands, by the way, for Not Always Right Ideas. As they put it, "It's a name that might sound odd for a branding agency, but it reflects everything we believe."
    One landmark project this year has been a comprehensive rebrand for the electronic music festival Field Day. Studio Nari created a dynamic and evolving identity that reflects the festival's growth and its connection to the electronic music scene and community.
    The core idea behind the rebrand is a "reactive future", allowing the brand to adapt and grow with the festival and current trends while maintaining a strong foundation. A new, steadfast wordmark is at its centre, while a new marque has been introduced for the first time.
    20. Beetroot Design Group
    Beetroot is a 25‑strong creative studio celebrated for its bold identities and storytelling-led approach. Based in Thessaloniki, Greece, their work spans visual identity, print, digital and motion, and has earned international recognition, including Red Dot Awards. Recently, they also won a Wood Pencil at the D&AD Awards 2025 for a series of posters created to promote live jazz music events.
    The creative idea behind all three designs stems from improvisation as a key feature of jazz. Each poster communicates the artist's name and other relevant information through a typographical "improvisation".
    21. Kind Studio
    Kind Studio is an independent creative agency based in London that specialises in branding and digital design, as well as offering services in animation, creative and art direction, and print design. Their goal is to collaborate closely with clients to create impactful and visually appealing designs.
    One recent project that piqued our interest was a bilingual, editorially-driven digital platform for FC Como Women, a professional Italian football club. To reflect the club's ambition of promoting gender equality and driving positive social change within football, the new website employs bold typography, strong imagery, and an empowering tone of voice to inspire and disseminate its message.

    22. Slug Global
    Slug Global is a creative agency and art collective founded by artist and musician Bosco. Focused on creating immersive experiences "for both IRL and URL", their goal is to work with artists and brands to establish a sustainable media platform that embodies the values of young millennials, Gen Z and Gen Alpha.
    One of Slug Global's recent projects involved a collaboration with SheaMoisture and xoNecole for a three-part series called The Root of It. This series celebrates black beauty and hair, highlighting its significance as a connection to ancestry, tradition, blueprint and culture for black women.

    23. Little Troop
    New York studio Little Troop crafts expressive and intimate branding for lifestyle, fashion, and cultural clients. Led by creative directors Noemie Le Coz and Jeremy Elliot, they're known for their playful and often "kid-like" approach to design, drawing inspiration from their own experiences as 90s kids.
    One of their recent and highly acclaimed projects is the visual identity for MoMA's first-ever family festival, Another World. Little Troop was tasked with developing a comprehensive visual identity that would extend from small items, such as café placemats, to large billboards.
    Their designs were deliberately a little "dream-like" and relied purely on illustration to sell the festival without needing photography. Little Troop also carefully selected seven colours from MoMA's existing brand guidelines to strike a balance between timelessness, gender neutrality, and fun.

    24. Morcos Key
    Morcos Key is a Brooklyn-based design studio co-founded by Jon Key and Wael Morcos. Collaborating with a diverse range of clients, including arts and cultural institutions, non-profits and commercial enterprises, they're known for translating clients' stories into impactful visual systems through thoughtful conversation and formal expression.
    One notable project is their visual identity work for Hammer & Hope, a magazine that focuses on politics and culture within the black radical tradition. For this project, Morcos Key developed not only the visual identity but also a custom all-caps typeface to reflect the publication's mission and content.
    25. Thirst
    Thirst, also known as Thirst Craft, is an award-winning strategic drinks packaging design agency based in Glasgow, Scotland, with additional hubs in London and New York. Founded in 2015 by Matthew Stephen Burns and Christopher John Black, the company specializes in building creatively distinctive and commercially effective brands for the beverage industry.
    To see what they're capable of, check out their work for SKYY Vodka. The new global visual identity system, called Audacious Glamour', aims to unify SKYY under a singular, powerful idea. The visual identity benefits from bolder framing, patterns, and a flavour-forward colour palette to highlight each product's "juicy attitude", while the photography style employs macro shots and liquid highlights to convey a premium feel.
    #creative #studios #inspiring #most
    The 25 creative studios inspiring us the most in 2025
    Which creative studio do you most admire right now, and why? This is a question we asked our community via an ongoing survey. With more than 700 responses so far, these are the top winners. What's striking about this year's results is the popularity of studios that aren't just producing beautiful work but are also actively shaping discussions and tackling the big challenges facing our industry and society. From the vibrant energy of Brazilian culture to the thoughtful minimalism of North European aesthetics, this list reflects a global creative landscape that's more connected, more conscious, and more collaborative than ever before. In short, these studios aren't just following trends; they're setting them. Read on to discover the 25 studios our community is most excited about right now. 1. Porto Rocha Porto Rocha is a New York-based agency that unites strategy and design to create work that evolves with the world we live in. It continues to dominate conversations in 2025, and it's easy to see why. Founders Felipe Rocha and Leo Porto have built something truly special—a studio that not only creates visually stunning work but also actively celebrates and amplifies diverse voices in design. For instance, their recent bold new identity for the São Paulo art museum MASP nods to Brazilian modernist design traditions while reimagining them for a contemporary audience. The rebrand draws heavily on the museum's iconic modernist architecture by Lina Bo Bardi, using a red-and-black colour palette and strong typography to reflect the building's striking visual presence. As we write this article, Porto Rocha just shared a new partnership with Google to reimagine the visual and verbal identity of its revolutionary Gemini AI model. We can't wait to see what they come up with! 2. DixonBaxi Simon Dixon and Aporva Baxi's London powerhouse specialises in creating brand strategies and design systems for "brave businesses" that want to challenge convention, including Hulu, Audible, and the Premier League. The studio had an exceptional start to 2025 by collaborating with Roblox on a brand new design system. At the heart of this major project is the Tilt: a 15-degree shift embedded in the logo that signals momentum, creativity, and anticipation. They've also continued to build their reputation as design thought leaders. At the OFFF Festival 2025, for instance, Simon and Aporva delivered a masterclass on running a successful brand design agency. Their core message centred on the importance of people and designing with intention, even in the face of global challenges. They also highlighted "Super Futures," their program that encourages employees to think freely and positively about brand challenges and audience desires, aiming to reclaim creative liberation. And if that wasn't enough, DixonBaxi has just launched its brand new website, one that's designed to be open in nature. As Simon explains: "It's not a shop window. It's a space to share the thinking and ethos that drive us. You'll find our work, but more importantly, what shapes it. No guff. Just us." 3. Mother Mother is a renowned independent creative agency founded in London and now boasts offices in New York and Los Angeles as well. They've spent 2025 continuing to push the boundaries of what advertising can achieve. And they've made an especially big splash with their latest instalment of KFC's 'Believe' campaign, featuring a surreal and humorous take on KFC's gravy. As we wrote at the time: "Its balance between theatrical grandeur and self-awareness makes the campaign uniquely engaging." 4. Studio Dumbar/DEPT® Based in Rotterdam, Studio Dumbar/DEPT® is widely recognised for its influential work in visual branding and identity, often incorporating creative coding and sound, for clients such as the Dutch Railways, Instagram, and the Van Gogh Museum. In 2025, we've especially admired their work for the Dutch football club Feyenoord, which brings the team under a single, cohesive vision that reflects its energy and prowess. This groundbreaking rebrand, unveiled at the start of May, moves away from nostalgia, instead emphasising the club's "measured ferocity, confidence, and ambition". 5. HONDO Based between Palma de Mallorca, Spain and London, HONDO specialises in branding, editorial, typography and product design. We're particular fans of their rebranding of metal furniture makers Castil, based around clean and versatile designs that highlight Castil's vibrant and customisable products. This new system features a bespoke monospaced typeface and logo design that evokes Castil's adaptability and the precision of its craftsmanship. 6. Smith & Diction Smith & Diction is a small but mighty design and copy studio founded by Mike and Chara Smith in Philadelphia. Born from dreams, late-night chats, and plenty of mistakes, the studio has grown into a creative force known for thoughtful, boundary-pushing branding. Starting out with Mike designing in a tiny apartment while Chara held down a day job, the pair learned the ropes the hard way—and now they're thriving. Recent highlights include their work with Gamma, an AI platform that lets you quickly get ideas out of your head and into a presentation deck or onto a website. Gamma wanted their brand update to feel "VERY fun and a little bit out there" with an AI-first approach. So Smith & Diction worked hard to "put weird to the test" while still developing responsible systems for logo, type and colour. The results, as ever, were exceptional. 7. DNCO DNCO is a London and New York-based creative studio specialising in place branding. They are best known for shaping identities, digital tools, and wayfinding for museums, cultural institutions, and entire neighbourhoods, with clients including the Design Museum, V&A and Transport for London. Recently, DNCO has been making headlines again with its ambitious brand refresh for Dumbo, a New York neighbourhood struggling with misperceptions due to mass tourism. The goal was to highlight Dumbo's unconventional spirit and demonstrate it as "a different side of New York." DNCO preserved the original diagonal logo and introduced a flexible "tape graphic" system, inspired by the neighbourhood's history of inventing the cardboard box, to reflect its ingenuity and reveal new perspectives. The colour palette and typography were chosen to embody Dumbo's industrial and gritty character. 8. Hey Studio Founded by Verònica Fuerte in Barcelona, Spain, Hey Studio is a small, all-female design agency celebrated for its striking use of geometry, bold colour, and playful yet refined visual language. With a focus on branding, illustration, editorial design, and typography, they combine joy with craft to explore issues with heart and purpose. A great example of their impact is their recent branding for Rainbow Wool. This German initiative is transforming wool from gay rams into fashion products to support the LGBT community. As is typical for Hey Studio, the project's identity is vibrant and joyful, utilising bright, curved shapes that will put a smile on everyone's face. 9. Koto Koto is a London-based global branding and digital studio known for co-creation, strategic thinking, expressive design systems, and enduring partnerships. They're well-known in the industry for bringing warmth, optimism and clarity to complex brand challenges. Over the past 18 months, they've undertaken a significant project to refresh Amazon's global brand identity. This extensive undertaking has involved redesigning Amazon's master brand and over 50 of its sub-brands across 15 global markets. Koto's approach, described as "radical coherence", aims to refine and modernize Amazon's most recognizable elements rather than drastically changing them. You can read more about the project here. 10. Robot Food Robot Food is a Leeds-based, brand-first creative studio recognised for its strategic and holistic approach. They're past masters at melding creative ideas with commercial rigour across packaging, brand strategy and campaign design. Recent Robot Food projects have included a bold rebrand for Hip Pop, a soft drinks company specializing in kombucha and alternative sodas. Their goal was to elevate Hip Pop from an indie challenger to a mainstream category leader, moving away from typical health drink aesthetics. The results are visually striking, with black backgrounds prominently featured, punctuated by vibrant fruit illustrations and flavour-coded colours. about the project here. 11. Saffron Brand Consultants Saffron is an independent global consultancy with offices in London, Madrid, Vienna and Istanbul. With deep expertise in naming, strategy, identity, and design systems, they work with leading public and private-sector clients to develop confident, culturally intelligent brands. One 2025 highlight so far has been their work for Saudi National Bankto create NEO, a groundbreaking digital lifestyle bank in Saudi Arabia. Saffron integrated cultural and design trends, including Saudi neo-futurism, for its sonic identity to create a product that supports both individual and community connections. The design system strikes a balance between modern Saudi aesthetics and the practical demands of a fast-paced digital product, ensuring a consistent brand reflection across all interactions. 12. Alright Studio Alright Studio is a full-service strategy, creative, production and technology agency based in Brooklyn, New York. It prides itself on a "no house style" approach for clients, including A24, Meta Platforms, and Post Malone. One of the most exciting of their recent projects has been Offball, a digital-first sports news platform that aims to provide more nuanced, positive sports storytelling. Alright Studio designed a clean, intuitive, editorial-style platform featuring a masthead-like logotype and universal sports iconography, creating a calmer user experience aligned with OffBall's positive content. 13. Wolff Olins Wolff Olins is a global brand consultancy with four main offices: London, New York, San Francisco, and Los Angeles. Known for their courageous, culturally relevant branding and forward-thinking strategy, they collaborate with large corporations and trailblazing organisations to create bold, authentic brand identities that resonate emotionally. A particular highlight of 2025 so far has been their collaboration with Leo Burnett to refresh Sandals Resorts' global brand with the "Made of Caribbean" campaign. This strategic move positions Sandals not merely as a luxury resort but as a cultural ambassador for the Caribbean. Wolff Olins developed a new visual identity called "Natural Vibrancy," integrating local influences with modern design to reflect a genuine connection to the islands' culture. This rebrand speaks to a growing traveller demand for authenticity and meaningful experiences, allowing Sandals to define itself as an extension of the Caribbean itself. 14. COLLINS Founded by Brian Collins, COLLINS is an independent branding and design consultancy based in the US, celebrated for its playful visual language, expressive storytelling and culturally rich identity systems. In the last few months, we've loved the new branding they designed for Barcelona's 25th Offf Festival, which departs from its usual consistent wordmark. The updated identity is inspired by the festival's role within the international creative community, and is rooted in the concept of 'Centre Offf Gravity'. This concept is visually expressed through the festival's name, which appears to exert a gravitational pull on the text boxes, causing them to "stick" to it. Additionally, the 'f's in the wordmark are merged into a continuous line reminiscent of a magnet, with the motion graphics further emphasising the gravitational pull as the name floats and other elements follow. 15. Studio Spass Studio Spass is a creative studio based in Rotterdam, the Netherlands, focused on vibrant and dynamic identity systems that reflect the diverse and multifaceted nature of cultural institutions. One of their recent landmark projects was Bigger, a large-scale typographic installation created for the Shenzhen Art Book Fair. Inspired by tear-off calendars and the physical act of reading, Studio Spass used 264 A4 books, with each page displaying abstract details, to create an evolving grid of colour and type. Visitors were invited to interact with the installation by flipping pages, constantly revealing new layers of design and a hidden message: "Enjoy books!" 16. Applied Design Works Applied Design Works is a New York studio that specialises in reshaping businesses through branding and design. They provide expertise in design, strategy, and implementation, with a focus on building long-term, collaborative relationships with their clients. We were thrilled by their recent work for Grand Central Madison, where they were instrumental in ushering in a new era for the transportation hub. Applied Design sought to create a commuter experience that imbued the spirit of New York, showcasing its diversity of thought, voice, and scale that befits one of the greatest cities in the world and one of the greatest structures in it. 17. The Chase The Chase Creative Consultants is a Manchester-based independent creative consultancy with over 35 years of experience, known for blending humour, purpose, and strong branding to rejuvenate popular consumer campaigns. "We're not designers, writers, advertisers or brand strategists," they say, "but all of these and more. An ideas-based creative studio." Recently, they were tasked with shaping the identity of York Central, a major urban regeneration project set to become a new city quarter for York. The Chase developed the identity based on extensive public engagement, listening to residents of all ages about their perceptions of the city and their hopes for the new area. The resulting brand identity uses linear forms that subtly reference York's famous railway hub, symbolising the long-standing connections the city has fostered. 18. A Practice for Everyday Life Based in London and founded by Kirsty Carter and Emma Thomas, A Practice for Everyday Life built a reputation as a sought-after collaborator with like-minded companies, galleries, institutions and individuals. Not to mention a conceptual rigour that ensures each design is meaningful and original. Recently, they've been working on the visual identity for Muzej Lah, a new international museum for contemporary art in Bled, Slovenia opening in 2026. This centres around a custom typeface inspired by the slanted geometry and square detailing of its concrete roof tiles. It also draws from European modernist typography and the experimental lettering of Jože Plečnik, one of Slovenia's most influential architects.⁠ A Practice for Everyday Life. Photo: Carol Sachs Alexey Brodovitch: Astonish Me publication design by A Practice for Everyday Life, 2024. Photo: Ed Park La Biennale di Venezia identity by A Practice for Everyday Life, 2022. Photo: Thomas Adank CAM – Centro de Arte Moderna Gulbenkian identity by A Practice for Everyday Life, 2024. Photo: Sanda Vučković 19. Studio Nari Studio Nari is a London-based creative and branding agency partnering with clients around the world to build "brands that truly connect with people". NARI stands, by the way, for Not Always Right Ideas. As they put it, "It's a name that might sound odd for a branding agency, but it reflects everything we believe." One landmark project this year has been a comprehensive rebrand for the electronic music festival Field Day. Studio Nari created a dynamic and evolving identity that reflects the festival's growth and its connection to the electronic music scene and community. The core idea behind the rebrand is a "reactive future", allowing the brand to adapt and grow with the festival and current trends while maintaining a strong foundation. A new, steadfast wordmark is at its centre, while a new marque has been introduced for the first time. 20. Beetroot Design Group Beetroot is a 25‑strong creative studio celebrated for its bold identities and storytelling-led approach. Based in Thessaloniki, Greece, their work spans visual identity, print, digital and motion, and has earned international recognition, including Red Dot Awards. Recently, they also won a Wood Pencil at the D&AD Awards 2025 for a series of posters created to promote live jazz music events. The creative idea behind all three designs stems from improvisation as a key feature of jazz. Each poster communicates the artist's name and other relevant information through a typographical "improvisation". 21. Kind Studio Kind Studio is an independent creative agency based in London that specialises in branding and digital design, as well as offering services in animation, creative and art direction, and print design. Their goal is to collaborate closely with clients to create impactful and visually appealing designs. One recent project that piqued our interest was a bilingual, editorially-driven digital platform for FC Como Women, a professional Italian football club. To reflect the club's ambition of promoting gender equality and driving positive social change within football, the new website employs bold typography, strong imagery, and an empowering tone of voice to inspire and disseminate its message. 22. Slug Global Slug Global is a creative agency and art collective founded by artist and musician Bosco. Focused on creating immersive experiences "for both IRL and URL", their goal is to work with artists and brands to establish a sustainable media platform that embodies the values of young millennials, Gen Z and Gen Alpha. One of Slug Global's recent projects involved a collaboration with SheaMoisture and xoNecole for a three-part series called The Root of It. This series celebrates black beauty and hair, highlighting its significance as a connection to ancestry, tradition, blueprint and culture for black women. 23. Little Troop New York studio Little Troop crafts expressive and intimate branding for lifestyle, fashion, and cultural clients. Led by creative directors Noemie Le Coz and Jeremy Elliot, they're known for their playful and often "kid-like" approach to design, drawing inspiration from their own experiences as 90s kids. One of their recent and highly acclaimed projects is the visual identity for MoMA's first-ever family festival, Another World. Little Troop was tasked with developing a comprehensive visual identity that would extend from small items, such as café placemats, to large billboards. Their designs were deliberately a little "dream-like" and relied purely on illustration to sell the festival without needing photography. Little Troop also carefully selected seven colours from MoMA's existing brand guidelines to strike a balance between timelessness, gender neutrality, and fun. 24. Morcos Key Morcos Key is a Brooklyn-based design studio co-founded by Jon Key and Wael Morcos. Collaborating with a diverse range of clients, including arts and cultural institutions, non-profits and commercial enterprises, they're known for translating clients' stories into impactful visual systems through thoughtful conversation and formal expression. One notable project is their visual identity work for Hammer & Hope, a magazine that focuses on politics and culture within the black radical tradition. For this project, Morcos Key developed not only the visual identity but also a custom all-caps typeface to reflect the publication's mission and content. 25. Thirst Thirst, also known as Thirst Craft, is an award-winning strategic drinks packaging design agency based in Glasgow, Scotland, with additional hubs in London and New York. Founded in 2015 by Matthew Stephen Burns and Christopher John Black, the company specializes in building creatively distinctive and commercially effective brands for the beverage industry. To see what they're capable of, check out their work for SKYY Vodka. The new global visual identity system, called Audacious Glamour', aims to unify SKYY under a singular, powerful idea. The visual identity benefits from bolder framing, patterns, and a flavour-forward colour palette to highlight each product's "juicy attitude", while the photography style employs macro shots and liquid highlights to convey a premium feel. #creative #studios #inspiring #most
    WWW.CREATIVEBOOM.COM
    The 25 creative studios inspiring us the most in 2025
    Which creative studio do you most admire right now, and why? This is a question we asked our community via an ongoing survey. With more than 700 responses so far, these are the top winners. What's striking about this year's results is the popularity of studios that aren't just producing beautiful work but are also actively shaping discussions and tackling the big challenges facing our industry and society. From the vibrant energy of Brazilian culture to the thoughtful minimalism of North European aesthetics, this list reflects a global creative landscape that's more connected, more conscious, and more collaborative than ever before. In short, these studios aren't just following trends; they're setting them. Read on to discover the 25 studios our community is most excited about right now. 1. Porto Rocha Porto Rocha is a New York-based agency that unites strategy and design to create work that evolves with the world we live in. It continues to dominate conversations in 2025, and it's easy to see why. Founders Felipe Rocha and Leo Porto have built something truly special—a studio that not only creates visually stunning work but also actively celebrates and amplifies diverse voices in design. For instance, their recent bold new identity for the São Paulo art museum MASP nods to Brazilian modernist design traditions while reimagining them for a contemporary audience. The rebrand draws heavily on the museum's iconic modernist architecture by Lina Bo Bardi, using a red-and-black colour palette and strong typography to reflect the building's striking visual presence. As we write this article, Porto Rocha just shared a new partnership with Google to reimagine the visual and verbal identity of its revolutionary Gemini AI model. We can't wait to see what they come up with! 2. DixonBaxi Simon Dixon and Aporva Baxi's London powerhouse specialises in creating brand strategies and design systems for "brave businesses" that want to challenge convention, including Hulu, Audible, and the Premier League. The studio had an exceptional start to 2025 by collaborating with Roblox on a brand new design system. At the heart of this major project is the Tilt: a 15-degree shift embedded in the logo that signals momentum, creativity, and anticipation. They've also continued to build their reputation as design thought leaders. At the OFFF Festival 2025, for instance, Simon and Aporva delivered a masterclass on running a successful brand design agency. Their core message centred on the importance of people and designing with intention, even in the face of global challenges. They also highlighted "Super Futures," their program that encourages employees to think freely and positively about brand challenges and audience desires, aiming to reclaim creative liberation. And if that wasn't enough, DixonBaxi has just launched its brand new website, one that's designed to be open in nature. As Simon explains: "It's not a shop window. It's a space to share the thinking and ethos that drive us. You'll find our work, but more importantly, what shapes it. No guff. Just us." 3. Mother Mother is a renowned independent creative agency founded in London and now boasts offices in New York and Los Angeles as well. They've spent 2025 continuing to push the boundaries of what advertising can achieve. And they've made an especially big splash with their latest instalment of KFC's 'Believe' campaign, featuring a surreal and humorous take on KFC's gravy. As we wrote at the time: "Its balance between theatrical grandeur and self-awareness makes the campaign uniquely engaging." 4. Studio Dumbar/DEPT® Based in Rotterdam, Studio Dumbar/DEPT® is widely recognised for its influential work in visual branding and identity, often incorporating creative coding and sound, for clients such as the Dutch Railways, Instagram, and the Van Gogh Museum. In 2025, we've especially admired their work for the Dutch football club Feyenoord, which brings the team under a single, cohesive vision that reflects its energy and prowess. This groundbreaking rebrand, unveiled at the start of May, moves away from nostalgia, instead emphasising the club's "measured ferocity, confidence, and ambition". 5. HONDO Based between Palma de Mallorca, Spain and London, HONDO specialises in branding, editorial, typography and product design. We're particular fans of their rebranding of metal furniture makers Castil, based around clean and versatile designs that highlight Castil's vibrant and customisable products. This new system features a bespoke monospaced typeface and logo design that evokes Castil's adaptability and the precision of its craftsmanship. 6. Smith & Diction Smith & Diction is a small but mighty design and copy studio founded by Mike and Chara Smith in Philadelphia. Born from dreams, late-night chats, and plenty of mistakes, the studio has grown into a creative force known for thoughtful, boundary-pushing branding. Starting out with Mike designing in a tiny apartment while Chara held down a day job, the pair learned the ropes the hard way—and now they're thriving. Recent highlights include their work with Gamma, an AI platform that lets you quickly get ideas out of your head and into a presentation deck or onto a website. Gamma wanted their brand update to feel "VERY fun and a little bit out there" with an AI-first approach. So Smith & Diction worked hard to "put weird to the test" while still developing responsible systems for logo, type and colour. The results, as ever, were exceptional. 7. DNCO DNCO is a London and New York-based creative studio specialising in place branding. They are best known for shaping identities, digital tools, and wayfinding for museums, cultural institutions, and entire neighbourhoods, with clients including the Design Museum, V&A and Transport for London. Recently, DNCO has been making headlines again with its ambitious brand refresh for Dumbo, a New York neighbourhood struggling with misperceptions due to mass tourism. The goal was to highlight Dumbo's unconventional spirit and demonstrate it as "a different side of New York." DNCO preserved the original diagonal logo and introduced a flexible "tape graphic" system, inspired by the neighbourhood's history of inventing the cardboard box, to reflect its ingenuity and reveal new perspectives. The colour palette and typography were chosen to embody Dumbo's industrial and gritty character. 8. Hey Studio Founded by Verònica Fuerte in Barcelona, Spain, Hey Studio is a small, all-female design agency celebrated for its striking use of geometry, bold colour, and playful yet refined visual language. With a focus on branding, illustration, editorial design, and typography, they combine joy with craft to explore issues with heart and purpose. A great example of their impact is their recent branding for Rainbow Wool. This German initiative is transforming wool from gay rams into fashion products to support the LGBT community. As is typical for Hey Studio, the project's identity is vibrant and joyful, utilising bright, curved shapes that will put a smile on everyone's face. 9. Koto Koto is a London-based global branding and digital studio known for co-creation, strategic thinking, expressive design systems, and enduring partnerships. They're well-known in the industry for bringing warmth, optimism and clarity to complex brand challenges. Over the past 18 months, they've undertaken a significant project to refresh Amazon's global brand identity. This extensive undertaking has involved redesigning Amazon's master brand and over 50 of its sub-brands across 15 global markets. Koto's approach, described as "radical coherence", aims to refine and modernize Amazon's most recognizable elements rather than drastically changing them. You can read more about the project here. 10. Robot Food Robot Food is a Leeds-based, brand-first creative studio recognised for its strategic and holistic approach. They're past masters at melding creative ideas with commercial rigour across packaging, brand strategy and campaign design. Recent Robot Food projects have included a bold rebrand for Hip Pop, a soft drinks company specializing in kombucha and alternative sodas. Their goal was to elevate Hip Pop from an indie challenger to a mainstream category leader, moving away from typical health drink aesthetics. The results are visually striking, with black backgrounds prominently featured (a rarity in the health drink aisle), punctuated by vibrant fruit illustrations and flavour-coded colours. Read more about the project here. 11. Saffron Brand Consultants Saffron is an independent global consultancy with offices in London, Madrid, Vienna and Istanbul. With deep expertise in naming, strategy, identity, and design systems, they work with leading public and private-sector clients to develop confident, culturally intelligent brands. One 2025 highlight so far has been their work for Saudi National Bank (SNB) to create NEO, a groundbreaking digital lifestyle bank in Saudi Arabia. Saffron integrated cultural and design trends, including Saudi neo-futurism, for its sonic identity to create a product that supports both individual and community connections. The design system strikes a balance between modern Saudi aesthetics and the practical demands of a fast-paced digital product, ensuring a consistent brand reflection across all interactions. 12. Alright Studio Alright Studio is a full-service strategy, creative, production and technology agency based in Brooklyn, New York. It prides itself on a "no house style" approach for clients, including A24, Meta Platforms, and Post Malone. One of the most exciting of their recent projects has been Offball, a digital-first sports news platform that aims to provide more nuanced, positive sports storytelling. Alright Studio designed a clean, intuitive, editorial-style platform featuring a masthead-like logotype and universal sports iconography, creating a calmer user experience aligned with OffBall's positive content. 13. Wolff Olins Wolff Olins is a global brand consultancy with four main offices: London, New York, San Francisco, and Los Angeles. Known for their courageous, culturally relevant branding and forward-thinking strategy, they collaborate with large corporations and trailblazing organisations to create bold, authentic brand identities that resonate emotionally. A particular highlight of 2025 so far has been their collaboration with Leo Burnett to refresh Sandals Resorts' global brand with the "Made of Caribbean" campaign. This strategic move positions Sandals not merely as a luxury resort but as a cultural ambassador for the Caribbean. Wolff Olins developed a new visual identity called "Natural Vibrancy," integrating local influences with modern design to reflect a genuine connection to the islands' culture. This rebrand speaks to a growing traveller demand for authenticity and meaningful experiences, allowing Sandals to define itself as an extension of the Caribbean itself. 14. COLLINS Founded by Brian Collins, COLLINS is an independent branding and design consultancy based in the US, celebrated for its playful visual language, expressive storytelling and culturally rich identity systems. In the last few months, we've loved the new branding they designed for Barcelona's 25th Offf Festival, which departs from its usual consistent wordmark. The updated identity is inspired by the festival's role within the international creative community, and is rooted in the concept of 'Centre Offf Gravity'. This concept is visually expressed through the festival's name, which appears to exert a gravitational pull on the text boxes, causing them to "stick" to it. Additionally, the 'f's in the wordmark are merged into a continuous line reminiscent of a magnet, with the motion graphics further emphasising the gravitational pull as the name floats and other elements follow. 15. Studio Spass Studio Spass is a creative studio based in Rotterdam, the Netherlands, focused on vibrant and dynamic identity systems that reflect the diverse and multifaceted nature of cultural institutions. One of their recent landmark projects was Bigger, a large-scale typographic installation created for the Shenzhen Art Book Fair. Inspired by tear-off calendars and the physical act of reading, Studio Spass used 264 A4 books, with each page displaying abstract details, to create an evolving grid of colour and type. Visitors were invited to interact with the installation by flipping pages, constantly revealing new layers of design and a hidden message: "Enjoy books!" 16. Applied Design Works Applied Design Works is a New York studio that specialises in reshaping businesses through branding and design. They provide expertise in design, strategy, and implementation, with a focus on building long-term, collaborative relationships with their clients. We were thrilled by their recent work for Grand Central Madison (the station that connects Long Island to Grand Central Terminal), where they were instrumental in ushering in a new era for the transportation hub. Applied Design sought to create a commuter experience that imbued the spirit of New York, showcasing its diversity of thought, voice, and scale that befits one of the greatest cities in the world and one of the greatest structures in it. 17. The Chase The Chase Creative Consultants is a Manchester-based independent creative consultancy with over 35 years of experience, known for blending humour, purpose, and strong branding to rejuvenate popular consumer campaigns. "We're not designers, writers, advertisers or brand strategists," they say, "but all of these and more. An ideas-based creative studio." Recently, they were tasked with shaping the identity of York Central, a major urban regeneration project set to become a new city quarter for York. The Chase developed the identity based on extensive public engagement, listening to residents of all ages about their perceptions of the city and their hopes for the new area. The resulting brand identity uses linear forms that subtly reference York's famous railway hub, symbolising the long-standing connections the city has fostered. 18. A Practice for Everyday Life Based in London and founded by Kirsty Carter and Emma Thomas, A Practice for Everyday Life built a reputation as a sought-after collaborator with like-minded companies, galleries, institutions and individuals. Not to mention a conceptual rigour that ensures each design is meaningful and original. Recently, they've been working on the visual identity for Muzej Lah, a new international museum for contemporary art in Bled, Slovenia opening in 2026. This centres around a custom typeface inspired by the slanted geometry and square detailing of its concrete roof tiles. It also draws from European modernist typography and the experimental lettering of Jože Plečnik, one of Slovenia's most influential architects.⁠ A Practice for Everyday Life. Photo: Carol Sachs Alexey Brodovitch: Astonish Me publication design by A Practice for Everyday Life, 2024. Photo: Ed Park La Biennale di Venezia identity by A Practice for Everyday Life, 2022. Photo: Thomas Adank CAM – Centro de Arte Moderna Gulbenkian identity by A Practice for Everyday Life, 2024. Photo: Sanda Vučković 19. Studio Nari Studio Nari is a London-based creative and branding agency partnering with clients around the world to build "brands that truly connect with people". NARI stands, by the way, for Not Always Right Ideas. As they put it, "It's a name that might sound odd for a branding agency, but it reflects everything we believe." One landmark project this year has been a comprehensive rebrand for the electronic music festival Field Day. Studio Nari created a dynamic and evolving identity that reflects the festival's growth and its connection to the electronic music scene and community. The core idea behind the rebrand is a "reactive future", allowing the brand to adapt and grow with the festival and current trends while maintaining a strong foundation. A new, steadfast wordmark is at its centre, while a new marque has been introduced for the first time. 20. Beetroot Design Group Beetroot is a 25‑strong creative studio celebrated for its bold identities and storytelling-led approach. Based in Thessaloniki, Greece, their work spans visual identity, print, digital and motion, and has earned international recognition, including Red Dot Awards. Recently, they also won a Wood Pencil at the D&AD Awards 2025 for a series of posters created to promote live jazz music events. The creative idea behind all three designs stems from improvisation as a key feature of jazz. Each poster communicates the artist's name and other relevant information through a typographical "improvisation". 21. Kind Studio Kind Studio is an independent creative agency based in London that specialises in branding and digital design, as well as offering services in animation, creative and art direction, and print design. Their goal is to collaborate closely with clients to create impactful and visually appealing designs. One recent project that piqued our interest was a bilingual, editorially-driven digital platform for FC Como Women, a professional Italian football club. To reflect the club's ambition of promoting gender equality and driving positive social change within football, the new website employs bold typography, strong imagery, and an empowering tone of voice to inspire and disseminate its message. 22. Slug Global Slug Global is a creative agency and art collective founded by artist and musician Bosco (Brittany Bosco). Focused on creating immersive experiences "for both IRL and URL", their goal is to work with artists and brands to establish a sustainable media platform that embodies the values of young millennials, Gen Z and Gen Alpha. One of Slug Global's recent projects involved a collaboration with SheaMoisture and xoNecole for a three-part series called The Root of It. This series celebrates black beauty and hair, highlighting its significance as a connection to ancestry, tradition, blueprint and culture for black women. 23. Little Troop New York studio Little Troop crafts expressive and intimate branding for lifestyle, fashion, and cultural clients. Led by creative directors Noemie Le Coz and Jeremy Elliot, they're known for their playful and often "kid-like" approach to design, drawing inspiration from their own experiences as 90s kids. One of their recent and highly acclaimed projects is the visual identity for MoMA's first-ever family festival, Another World. Little Troop was tasked with developing a comprehensive visual identity that would extend from small items, such as café placemats, to large billboards. Their designs were deliberately a little "dream-like" and relied purely on illustration to sell the festival without needing photography. Little Troop also carefully selected seven colours from MoMA's existing brand guidelines to strike a balance between timelessness, gender neutrality, and fun. 24. Morcos Key Morcos Key is a Brooklyn-based design studio co-founded by Jon Key and Wael Morcos. Collaborating with a diverse range of clients, including arts and cultural institutions, non-profits and commercial enterprises, they're known for translating clients' stories into impactful visual systems through thoughtful conversation and formal expression. One notable project is their visual identity work for Hammer & Hope, a magazine that focuses on politics and culture within the black radical tradition. For this project, Morcos Key developed not only the visual identity but also a custom all-caps typeface to reflect the publication's mission and content. 25. Thirst Thirst, also known as Thirst Craft, is an award-winning strategic drinks packaging design agency based in Glasgow, Scotland, with additional hubs in London and New York. Founded in 2015 by Matthew Stephen Burns and Christopher John Black, the company specializes in building creatively distinctive and commercially effective brands for the beverage industry. To see what they're capable of, check out their work for SKYY Vodka. The new global visual identity system, called Audacious Glamour', aims to unify SKYY under a singular, powerful idea. The visual identity benefits from bolder framing, patterns, and a flavour-forward colour palette to highlight each product's "juicy attitude", while the photography style employs macro shots and liquid highlights to convey a premium feel.
    Like
    Love
    Wow
    Angry
    Sad
    478
    0 Reacties 0 aandelen
  • Mock up a website in five prompts

    “Wait, can users actually add products to the cart?”Every prototype faces that question or one like it. You start to explain it’s “just Figma,” “just dummy data,” but what if you didn’t need disclaimers?What if you could hand clients—or your team—a working, data-connected mock-up of their website, or new pages and components, in less time than it takes to wireframe?That’s the challenge we’ll tackle today. But first, we need to look at:The problem with today’s prototyping toolsPick two: speed, flexibility, or interactivity.The prototyping ecosystem, despite having amazing software that addresses a huge variety of needs, doesn’t really have one tool that gives you all three.Wireframing apps let you draw boxes in minutes but every button is fake. Drag-and-drop builders animate scroll triggers until you ask for anything off-template. Custom code frees you… after you wave goodbye to a few afternoons.AI tools haven’t smashed the trade-off; they’ve just dressed it in flashier costumes. One prompt births a landing page, the next dumps a 2,000-line, worse-than-junior-level React file in your lap. The bottleneck is still there. Builder’s approach to website mockupsWe’ve been trying something a little different to maintain speed, flexibility, and interactivity while mocking full websites. Our AI-driven visual editor:Spins up a repo in seconds or connects to your existing one to use the code as design inspiration. React, Vue, Angular, and Svelte all work out of the box.
    Lets you shape components via plain English, visual edits, copy/pasted Figma frames, web inspos, MCP tools, and constant visual awareness of your entire website.
    Commits each change as a clean GitHub pull request your team can review like hand-written code. All your usual CI checks and lint rules apply.And if you need a tweak, you can comment to @builderio-bot right in the GitHub PR to make asynchronous changes without context switching.This results in a live site the café owner can interact with today, and a branch your devs can merge tomorrow. Stakeholders get to click actual buttons and trigger real state—no more “so, just imagine this works” demos.Let’s see it in action.From blank canvas to working mockup in five promptsToday, I’m going to mock up a fake business website. You’re welcome to create a real one.Before we fire off a single prompt, grab a note and write:Business name & vibe
    Core pages
    Primary goal
    Brand palette & toneThat’s it. Don’t sweat the details—we can always iterate. For mine, I wrote:1. Sunny Trails Bakery — family-owned, feel-good, smells like warm cinnamon.
    2. Home, About, Pricing / Subscription Box, Menu.
    3. Drive online orders and foot traffic—every CTA should funnel toward “Order Now” or “Reserve a Table.”
    4. Warm yellow, chocolate brown, rounded typography, playful copy.We’re not trying to fit everything here. What matters is clarity on what we’re creating, so the AI has enough context to produce usable scaffolds, and so later tweaks stay aligned with the client’s vision. Builder will default to using React, Vite, and Tailwind. If you want a different JS framework, you can link an existing repo in that stack. In the near future, you won’t need to do this extra step to get non-React frameworks to function.An entire website from the first promptNow, we’re ready to get going.Head over to Builder.io and paste in this prompt or your own:Create a cozy bakery website called “Sunny Trails Bakery” with pages for:
    • Home
    • About
    • Pricing
    • Menu
    Brand palette: warm yellow and chocolate brown. Tone: playful, inviting. The restaurant is family-owned, feel-good, and smells like cinnamon.
    The goal of this site is to drive online orders and foot traffic—every CTA should funnel toward "Order Now" or "Reserve a Table."Once you hit enter, Builder will spin up a new dev container, and then inside that container, the AI will build out the first version of your site. You can leave the page and come back when it’s done.Now, before we go further, let’s create our repo, so that we get version history right from the outset. Click “Create Repo” up in the top right, and link your GitHub account.Once the process is complete, you’ll have a brand new repo.If you need any help on this step, or any of the below, check out these docs.Making the mockup’s order system workFrom our one-shot prompt, we’ve already got a really nice start for our client. However, when we press the “Order Now” button, we just get a generic alert. Let’s fix this.The best part about connecting to GitHub is that we get version control. Head back to your dashboard and edit the settings of your new project. We can give it a better name, and then, in the “Advanced” section, we can change the “Commit Mode” to “Pull Requests.”Now, we have the ability to create new branches right within Builder, allowing us to make drastic changes without worrying about the main version. This is also helpful if you’d like to show your client or team a few different versions of the same prototype.On a new branch, I’ll write another short prompt:Can you make the "Order Now" button work, even if it's just with dummy JSON for now?As you can see in the GIF above, Builder creates an ordering system and a fully mobile-responsive cart and checkout flow.Now, we can click “Send PR” in the top right, and we have an ordinary GitHub PR that can be reviewed and merged as needed.This is what’s possible in two prompts. For our third, let’s gussy up the style.If you’re like me, you might spend a lot of time admiring other people’s cool designs and learning how to code up similar components in your own style.Luckily, Builder has this capability, too, with our Chrome extension. I found a “Featured Posts” section on OpenAI’s website, where I like how the layout and scrolling work. We can copy and paste it onto our “Featured Treats” section, retaining our cafe’s distinctive brand style.Don’t worry—OpenAI doesn’t mind a little web scraping.You can do this with any component on any website, so your own projects can very quickly become a “best of the web” if you know what you’re doing.Plus, you can use Figma designs in much the same way, with even better design fidelity. Copy and paste a Figma frame with our Figma plugin, and tell the AI to either use the component as inspiration or as a 1:1 to reference for what the design should be.Now, we’re ready to send our PR. This time, let’s take a closer look at the code the AI has created.As you can see, the code is neatly formatted into two reusable components. Scrolling down further, I find a CSS file and then the actual implementation on the homepage, with clean JSON to represent the dummy post data.Design tweaks to the mockup with visual editsOne issue that cropped up when the AI brought in the OpenAI layout is that it changed my text from “Featured Treats” to “Featured Stories & Treats.” I’ve realized I don’t like either, and I want to replace that text with: “Fresh Out of the Bakery.”It would be silly, though, to prompt the AI just for this small tweak. Let’s switch into edit mode.Edit Mode lets you select any component and change any of its content or underlying CSS directly. You get a host of Webflow-like options to choose from, so that you can finesse the details as needed.Once you’ve made all the visual changes you want—maybe tweaking a button color or a border radius—you can click “Apply Edits,” and the AI will ensure the underlying code matches your repo’s style.Async fixes to the mockup with Builder BotNow, our pull request is nearly ready to merge, but I found one issue with it:When we copied the OpenAI website layout earlier, one of the blog posts had a video as its featured graphic instead of just an image. This is cool for OpenAI, but for our bakery, I just wanted images in this section. Since I didn’t instruct Builder’s AI otherwise, it went ahead and followed the layout and created extra code for video capability.No problem. We can fix this inside GItHub with our final prompt. We just need to comment on the PR and tag builderio-bot. Within about a minute, Builder Bot has successfully removed the video functionality, leaving a minimal diff that affects only the code it needed to. For example: Returning to my project in Builder, I can see that the bot’s changes are accounted for in the chat window as well, and I can use the live preview link to make sure my site works as expected:Now, if this were a real project, you could easily deploy this to the web for your client. After all, you’ve got a whole GitHub repo. This isn’t just a mockup; it’s actual code you can tweak—with Builder or Cursor or by hand—until you’re satisfied to run the site in production.So, why use Builder to mock up your website?Sure, this has been a somewhat contrived example. A real prototype is going to look prettier, because I’m going to spend more time on pieces of the design that I don’t like as much.But that’s the point of the best AI tools: they don’t take you, the human, out of the loop.You still get to make all the executive decisions, and it respects your hard work. Since you can constantly see all the code the AI creates, work in branches, and prompt with component-level precision, you can stop worrying about AI overwriting your opinions and start using it more as the tool it’s designed to be.You can copy in your team’s Figma designs, import web inspos, connect MCP servers to get Jira tickets in hand, and—most importantly—work with existing repos full of existing styles that Builder will understand and match, just like it matched OpenAI’s layout to our little cafe.So, we get speed, flexibility, and interactivity all the way from prompt to PR to production.Try Builder today.
    #mock #website #five #prompts
    Mock up a website in five prompts
    “Wait, can users actually add products to the cart?”Every prototype faces that question or one like it. You start to explain it’s “just Figma,” “just dummy data,” but what if you didn’t need disclaimers?What if you could hand clients—or your team—a working, data-connected mock-up of their website, or new pages and components, in less time than it takes to wireframe?That’s the challenge we’ll tackle today. But first, we need to look at:The problem with today’s prototyping toolsPick two: speed, flexibility, or interactivity.The prototyping ecosystem, despite having amazing software that addresses a huge variety of needs, doesn’t really have one tool that gives you all three.Wireframing apps let you draw boxes in minutes but every button is fake. Drag-and-drop builders animate scroll triggers until you ask for anything off-template. Custom code frees you… after you wave goodbye to a few afternoons.AI tools haven’t smashed the trade-off; they’ve just dressed it in flashier costumes. One prompt births a landing page, the next dumps a 2,000-line, worse-than-junior-level React file in your lap. The bottleneck is still there. Builder’s approach to website mockupsWe’ve been trying something a little different to maintain speed, flexibility, and interactivity while mocking full websites. Our AI-driven visual editor:Spins up a repo in seconds or connects to your existing one to use the code as design inspiration. React, Vue, Angular, and Svelte all work out of the box. Lets you shape components via plain English, visual edits, copy/pasted Figma frames, web inspos, MCP tools, and constant visual awareness of your entire website. Commits each change as a clean GitHub pull request your team can review like hand-written code. All your usual CI checks and lint rules apply.And if you need a tweak, you can comment to @builderio-bot right in the GitHub PR to make asynchronous changes without context switching.This results in a live site the café owner can interact with today, and a branch your devs can merge tomorrow. Stakeholders get to click actual buttons and trigger real state—no more “so, just imagine this works” demos.Let’s see it in action.From blank canvas to working mockup in five promptsToday, I’m going to mock up a fake business website. You’re welcome to create a real one.Before we fire off a single prompt, grab a note and write:Business name & vibe Core pages Primary goal Brand palette & toneThat’s it. Don’t sweat the details—we can always iterate. For mine, I wrote:1. Sunny Trails Bakery — family-owned, feel-good, smells like warm cinnamon. 2. Home, About, Pricing / Subscription Box, Menu. 3. Drive online orders and foot traffic—every CTA should funnel toward “Order Now” or “Reserve a Table.” 4. Warm yellow, chocolate brown, rounded typography, playful copy.We’re not trying to fit everything here. What matters is clarity on what we’re creating, so the AI has enough context to produce usable scaffolds, and so later tweaks stay aligned with the client’s vision. Builder will default to using React, Vite, and Tailwind. If you want a different JS framework, you can link an existing repo in that stack. In the near future, you won’t need to do this extra step to get non-React frameworks to function.An entire website from the first promptNow, we’re ready to get going.Head over to Builder.io and paste in this prompt or your own:Create a cozy bakery website called “Sunny Trails Bakery” with pages for: • Home • About • Pricing • Menu Brand palette: warm yellow and chocolate brown. Tone: playful, inviting. The restaurant is family-owned, feel-good, and smells like cinnamon. The goal of this site is to drive online orders and foot traffic—every CTA should funnel toward "Order Now" or "Reserve a Table."Once you hit enter, Builder will spin up a new dev container, and then inside that container, the AI will build out the first version of your site. You can leave the page and come back when it’s done.Now, before we go further, let’s create our repo, so that we get version history right from the outset. Click “Create Repo” up in the top right, and link your GitHub account.Once the process is complete, you’ll have a brand new repo.If you need any help on this step, or any of the below, check out these docs.Making the mockup’s order system workFrom our one-shot prompt, we’ve already got a really nice start for our client. However, when we press the “Order Now” button, we just get a generic alert. Let’s fix this.The best part about connecting to GitHub is that we get version control. Head back to your dashboard and edit the settings of your new project. We can give it a better name, and then, in the “Advanced” section, we can change the “Commit Mode” to “Pull Requests.”Now, we have the ability to create new branches right within Builder, allowing us to make drastic changes without worrying about the main version. This is also helpful if you’d like to show your client or team a few different versions of the same prototype.On a new branch, I’ll write another short prompt:Can you make the "Order Now" button work, even if it's just with dummy JSON for now?As you can see in the GIF above, Builder creates an ordering system and a fully mobile-responsive cart and checkout flow.Now, we can click “Send PR” in the top right, and we have an ordinary GitHub PR that can be reviewed and merged as needed.This is what’s possible in two prompts. For our third, let’s gussy up the style.If you’re like me, you might spend a lot of time admiring other people’s cool designs and learning how to code up similar components in your own style.Luckily, Builder has this capability, too, with our Chrome extension. I found a “Featured Posts” section on OpenAI’s website, where I like how the layout and scrolling work. We can copy and paste it onto our “Featured Treats” section, retaining our cafe’s distinctive brand style.Don’t worry—OpenAI doesn’t mind a little web scraping.You can do this with any component on any website, so your own projects can very quickly become a “best of the web” if you know what you’re doing.Plus, you can use Figma designs in much the same way, with even better design fidelity. Copy and paste a Figma frame with our Figma plugin, and tell the AI to either use the component as inspiration or as a 1:1 to reference for what the design should be.Now, we’re ready to send our PR. This time, let’s take a closer look at the code the AI has created.As you can see, the code is neatly formatted into two reusable components. Scrolling down further, I find a CSS file and then the actual implementation on the homepage, with clean JSON to represent the dummy post data.Design tweaks to the mockup with visual editsOne issue that cropped up when the AI brought in the OpenAI layout is that it changed my text from “Featured Treats” to “Featured Stories & Treats.” I’ve realized I don’t like either, and I want to replace that text with: “Fresh Out of the Bakery.”It would be silly, though, to prompt the AI just for this small tweak. Let’s switch into edit mode.Edit Mode lets you select any component and change any of its content or underlying CSS directly. You get a host of Webflow-like options to choose from, so that you can finesse the details as needed.Once you’ve made all the visual changes you want—maybe tweaking a button color or a border radius—you can click “Apply Edits,” and the AI will ensure the underlying code matches your repo’s style.Async fixes to the mockup with Builder BotNow, our pull request is nearly ready to merge, but I found one issue with it:When we copied the OpenAI website layout earlier, one of the blog posts had a video as its featured graphic instead of just an image. This is cool for OpenAI, but for our bakery, I just wanted images in this section. Since I didn’t instruct Builder’s AI otherwise, it went ahead and followed the layout and created extra code for video capability.No problem. We can fix this inside GItHub with our final prompt. We just need to comment on the PR and tag builderio-bot. Within about a minute, Builder Bot has successfully removed the video functionality, leaving a minimal diff that affects only the code it needed to. For example: Returning to my project in Builder, I can see that the bot’s changes are accounted for in the chat window as well, and I can use the live preview link to make sure my site works as expected:Now, if this were a real project, you could easily deploy this to the web for your client. After all, you’ve got a whole GitHub repo. This isn’t just a mockup; it’s actual code you can tweak—with Builder or Cursor or by hand—until you’re satisfied to run the site in production.So, why use Builder to mock up your website?Sure, this has been a somewhat contrived example. A real prototype is going to look prettier, because I’m going to spend more time on pieces of the design that I don’t like as much.But that’s the point of the best AI tools: they don’t take you, the human, out of the loop.You still get to make all the executive decisions, and it respects your hard work. Since you can constantly see all the code the AI creates, work in branches, and prompt with component-level precision, you can stop worrying about AI overwriting your opinions and start using it more as the tool it’s designed to be.You can copy in your team’s Figma designs, import web inspos, connect MCP servers to get Jira tickets in hand, and—most importantly—work with existing repos full of existing styles that Builder will understand and match, just like it matched OpenAI’s layout to our little cafe.So, we get speed, flexibility, and interactivity all the way from prompt to PR to production.Try Builder today. #mock #website #five #prompts
    WWW.BUILDER.IO
    Mock up a website in five prompts
    “Wait, can users actually add products to the cart?”Every prototype faces that question or one like it. You start to explain it’s “just Figma,” “just dummy data,” but what if you didn’t need disclaimers?What if you could hand clients—or your team—a working, data-connected mock-up of their website, or new pages and components, in less time than it takes to wireframe?That’s the challenge we’ll tackle today. But first, we need to look at:The problem with today’s prototyping toolsPick two: speed, flexibility, or interactivity.The prototyping ecosystem, despite having amazing software that addresses a huge variety of needs, doesn’t really have one tool that gives you all three.Wireframing apps let you draw boxes in minutes but every button is fake. Drag-and-drop builders animate scroll triggers until you ask for anything off-template. Custom code frees you… after you wave goodbye to a few afternoons.AI tools haven’t smashed the trade-off; they’ve just dressed it in flashier costumes. One prompt births a landing page, the next dumps a 2,000-line, worse-than-junior-level React file in your lap. The bottleneck is still there. Builder’s approach to website mockupsWe’ve been trying something a little different to maintain speed, flexibility, and interactivity while mocking full websites. Our AI-driven visual editor:Spins up a repo in seconds or connects to your existing one to use the code as design inspiration. React, Vue, Angular, and Svelte all work out of the box. Lets you shape components via plain English, visual edits, copy/pasted Figma frames, web inspos, MCP tools, and constant visual awareness of your entire website. Commits each change as a clean GitHub pull request your team can review like hand-written code. All your usual CI checks and lint rules apply.And if you need a tweak, you can comment to @builderio-bot right in the GitHub PR to make asynchronous changes without context switching.This results in a live site the café owner can interact with today, and a branch your devs can merge tomorrow. Stakeholders get to click actual buttons and trigger real state—no more “so, just imagine this works” demos.Let’s see it in action.From blank canvas to working mockup in five promptsToday, I’m going to mock up a fake business website. You’re welcome to create a real one.Before we fire off a single prompt, grab a note and write:Business name & vibe Core pages Primary goal Brand palette & toneThat’s it. Don’t sweat the details—we can always iterate. For mine, I wrote:1. Sunny Trails Bakery — family-owned, feel-good, smells like warm cinnamon. 2. Home, About, Pricing / Subscription Box, Menu (with daily specials). 3. Drive online orders and foot traffic—every CTA should funnel toward “Order Now” or “Reserve a Table.” 4. Warm yellow, chocolate brown, rounded typography, playful copy.We’re not trying to fit everything here. What matters is clarity on what we’re creating, so the AI has enough context to produce usable scaffolds, and so later tweaks stay aligned with the client’s vision. Builder will default to using React, Vite, and Tailwind. If you want a different JS framework, you can link an existing repo in that stack. In the near future, you won’t need to do this extra step to get non-React frameworks to function.(Free tier Builder gives you 5 AI credits/day and 25/month—plenty to follow along with today’s demo. Upgrade only when you need it.)An entire website from the first promptNow, we’re ready to get going.Head over to Builder.io and paste in this prompt or your own:Create a cozy bakery website called “Sunny Trails Bakery” with pages for: • Home • About • Pricing • Menu Brand palette: warm yellow and chocolate brown. Tone: playful, inviting. The restaurant is family-owned, feel-good, and smells like cinnamon. The goal of this site is to drive online orders and foot traffic—every CTA should funnel toward "Order Now" or "Reserve a Table."Once you hit enter, Builder will spin up a new dev container, and then inside that container, the AI will build out the first version of your site. You can leave the page and come back when it’s done.Now, before we go further, let’s create our repo, so that we get version history right from the outset. Click “Create Repo” up in the top right, and link your GitHub account.Once the process is complete, you’ll have a brand new repo.If you need any help on this step, or any of the below, check out these docs.Making the mockup’s order system workFrom our one-shot prompt, we’ve already got a really nice start for our client. However, when we press the “Order Now” button, we just get a generic alert. Let’s fix this.The best part about connecting to GitHub is that we get version control. Head back to your dashboard and edit the settings of your new project. We can give it a better name, and then, in the “Advanced” section, we can change the “Commit Mode” to “Pull Requests.”Now, we have the ability to create new branches right within Builder, allowing us to make drastic changes without worrying about the main version. This is also helpful if you’d like to show your client or team a few different versions of the same prototype.On a new branch, I’ll write another short prompt:Can you make the "Order Now" button work, even if it's just with dummy JSON for now?As you can see in the GIF above, Builder creates an ordering system and a fully mobile-responsive cart and checkout flow.Now, we can click “Send PR” in the top right, and we have an ordinary GitHub PR that can be reviewed and merged as needed.This is what’s possible in two prompts. For our third, let’s gussy up the style.If you’re like me, you might spend a lot of time admiring other people’s cool designs and learning how to code up similar components in your own style.Luckily, Builder has this capability, too, with our Chrome extension. I found a “Featured Posts” section on OpenAI’s website, where I like how the layout and scrolling work. We can copy and paste it onto our “Featured Treats” section, retaining our cafe’s distinctive brand style.Don’t worry—OpenAI doesn’t mind a little web scraping.You can do this with any component on any website, so your own projects can very quickly become a “best of the web” if you know what you’re doing.Plus, you can use Figma designs in much the same way, with even better design fidelity. Copy and paste a Figma frame with our Figma plugin, and tell the AI to either use the component as inspiration or as a 1:1 to reference for what the design should be.(You can grab our design-to-code guide for a lot more ideas of what this can help you accomplish.)Now, we’re ready to send our PR. This time, let’s take a closer look at the code the AI has created.As you can see, the code is neatly formatted into two reusable components. Scrolling down further, I find a CSS file and then the actual implementation on the homepage, with clean JSON to represent the dummy post data.Design tweaks to the mockup with visual editsOne issue that cropped up when the AI brought in the OpenAI layout is that it changed my text from “Featured Treats” to “Featured Stories & Treats.” I’ve realized I don’t like either, and I want to replace that text with: “Fresh Out of the Bakery.”It would be silly, though, to prompt the AI just for this small tweak. Let’s switch into edit mode.Edit Mode lets you select any component and change any of its content or underlying CSS directly. You get a host of Webflow-like options to choose from, so that you can finesse the details as needed.Once you’ve made all the visual changes you want—maybe tweaking a button color or a border radius—you can click “Apply Edits,” and the AI will ensure the underlying code matches your repo’s style.Async fixes to the mockup with Builder BotNow, our pull request is nearly ready to merge, but I found one issue with it:When we copied the OpenAI website layout earlier, one of the blog posts had a video as its featured graphic instead of just an image. This is cool for OpenAI, but for our bakery, I just wanted images in this section. Since I didn’t instruct Builder’s AI otherwise, it went ahead and followed the layout and created extra code for video capability.No problem. We can fix this inside GItHub with our final prompt. We just need to comment on the PR and tag builderio-bot. Within about a minute, Builder Bot has successfully removed the video functionality, leaving a minimal diff that affects only the code it needed to. For example: Returning to my project in Builder, I can see that the bot’s changes are accounted for in the chat window as well, and I can use the live preview link to make sure my site works as expected:Now, if this were a real project, you could easily deploy this to the web for your client. After all, you’ve got a whole GitHub repo. This isn’t just a mockup; it’s actual code you can tweak—with Builder or Cursor or by hand—until you’re satisfied to run the site in production.So, why use Builder to mock up your website?Sure, this has been a somewhat contrived example. A real prototype is going to look prettier, because I’m going to spend more time on pieces of the design that I don’t like as much.But that’s the point of the best AI tools: they don’t take you, the human, out of the loop.You still get to make all the executive decisions, and it respects your hard work. Since you can constantly see all the code the AI creates, work in branches, and prompt with component-level precision, you can stop worrying about AI overwriting your opinions and start using it more as the tool it’s designed to be.You can copy in your team’s Figma designs, import web inspos, connect MCP servers to get Jira tickets in hand, and—most importantly—work with existing repos full of existing styles that Builder will understand and match, just like it matched OpenAI’s layout to our little cafe.So, we get speed, flexibility, and interactivity all the way from prompt to PR to production.Try Builder today.
    0 Reacties 0 aandelen
  • Mirela Cialai Q&A: Customer Engagement Book Interview

    Reading Time: 9 minutes
    In the ever-evolving landscape of customer engagement, staying ahead of the curve is not just advantageous, it’s essential.
    That’s why, for Chapter 7 of “The Customer Engagement Book: Adapt or Die,” we sat down with Mirela Cialai, a seasoned expert in CRM and Martech strategies at brands like Equinox. Mirela brings a wealth of knowledge in aligning technology roadmaps with business goals, shifting organizational focuses from acquisition to retention, and leveraging hyper-personalization to drive success.
    In this interview, Mirela dives deep into building robust customer engagement technology roadmaps. She unveils the “PAPER” framework—Plan, Audit, Prioritize, Execute, Refine—a simple yet effective strategy for marketers.
    You’ll gain insights into identifying gaps in your Martech stack, ensuring data accuracy, and prioritizing initiatives that deliver the greatest impact and ROI.
    Whether you’re navigating data silos, striving for cross-functional alignment, or aiming for seamless tech integration, Mirela’s expertise provides practical solutions and actionable takeaways.

     
    Mirela Cialai Q&A Interview
    1. How do you define the vision for a customer engagement platform roadmap in alignment with the broader business goals? Can you share any examples of successful visions from your experience?

    Defining the vision for the roadmap in alignment with the broader business goals involves creating a strategic framework that connects the team’s objectives with the organization’s overarching mission or primary objectives.

    This could be revenue growth, customer retention, market expansion, or operational efficiency.
    We then break down these goals into actionable areas where the team can contribute, such as improving engagement, increasing lifetime value, or driving acquisition.
    We articulate how the team will support business goals by defining the KPIs that link CRM outcomes — the team’s outcomes — to business goals.
    In a previous role, the CRM team I was leading faced significant challenges due to the lack of attribution capabilities and a reliance on surface-level metrics such as open rates and click-through rates to measure performance.
    This approach made it difficult to quantify the impact of our efforts on broader business objectives such as revenue growth.
    Recognizing this gap, I worked on defining a vision for the CRM team to address these shortcomings.
    Our vision was to drive measurable growth through enhanced data accuracy and improved attribution capabilities, which allowed us to deliver targeted, data-driven, and personalized customer experiences.
    To bring this vision to life, I developed a roadmap that focused on first improving data accuracy, building our attribution capabilities, and delivering personalization at scale.

    By aligning the vision with these strategic priorities, we were able to demonstrate the tangible impact of our efforts on the key business goals.

    2. What steps did you take to ensure data accuracy?
    The data team was very diligent in ensuring that our data warehouse had accurate data.
    So taking that as the source of truth, we started cleaning the data in all the other platforms that were integrated with our data warehouse — our CRM platform, our attribution analytics platform, etc.

    That’s where we started, looking at all the different integrations and ensuring that the data flows were correct and that we had all the right flows in place. And also validating and cleaning our email database — that helped, having more accurate data.

    3. How do you recommend shifting organizational focus from acquisition to retention within a customer engagement strategy?
    Shifting an organization’s focus from acquisition to retention requires a cultural and strategic shift, emphasizing the immense value that existing customers bring to long-term growth and profitability.
    I would start by quantifying the value of retention, showcasing how retaining customers is significantly more cost-effective than acquiring new ones. Research consistently shows that increasing retention rates by just 5% can boost profits by at least 25 to 95%.
    This data helps make a compelling case to stakeholders about the importance of prioritizing retention.
    Next, I would link retention to core business goals by demonstrating how enhancing customer lifetime value and loyalty can directly drive revenue growth.
    This involves shifting the organization’s focus to retention-specific metrics such as churn rate, repeat purchase rate, and customer LTV. These metrics provide actionable insights into customer behaviors and highlight the financial impact of retention initiatives, ensuring alignment with the broader company objectives.

    By framing retention as a driver of sustainable growth, the organization can see it not as a competing priority, but as a complementary strategy to acquisition, ultimately leading to a more balanced and effective customer engagement strategy.

    4. What are the key steps in analyzing a brand’s current Martech stack capabilities to identify gaps and opportunities for improvement?
    Developing a clear understanding of the Martech stack’s current state and ensuring it aligns with a brand’s strategic needs and future goals requires a structured and strategic approach.
    The process begins with defining what success looks like in terms of technology capabilities such as scalability, integration, automation, and data accessibility, and linking these capabilities directly to the brand’s broader business objectives.
    I start by doing an inventory of all tools currently in use, including their purpose, owner, and key functionalities, assessing if these tools are being used to their full potential or if there are features that remain unused, and reviewing how well tools integrate with one another and with our core systems, the data warehouse.
    Also, comparing the capabilities of each tool and results against industry standards and competitor practices and looking for missing functionalities such as personalization, omnichannel orchestration, or advanced analytics, and identifying overlapping tools that could be consolidated to save costs and streamline workflows.
    Finally, review the costs of the current tools against their impact on business outcomes and identify technologies that could reduce costs, increase efficiency, or deliver higher ROI through enhanced capabilities.

    Establish a regular review cycle for the Martech stack to ensure it evolves alongside the business and the technological landscape.

    5. How do you evaluate whether a company’s tech stack can support innovative customer-focused campaigns, and what red flags should marketers look out for?
    I recommend taking a structured approach and first ensure there is seamless integration across all tools to support a unified customer view and data sharing across the different channels.
    Determine if the stack can handle increasing data volumes, larger audiences, and additional channels as the campaigns grow, and check if it supports dynamic content, behavior-based triggers, and advanced segmentation and can process and act on data in real time through emerging technologies like AI/ML predictive analytics to enable marketers to launch responsive and timely campaigns.
    Most importantly, we need to ensure that the stack offers robust reporting tools that provide actionable insights, allowing teams to track performance and optimize campaigns.
    Some of the red flags are: data silos where customer data is fragmented across platforms and not easily accessible or integrated, inability to process or respond to customer behavior in real time, a reliance on manual intervention for tasks like segmentation, data extraction, campaign deployment, and poor scalability.

    If the stack struggles with growing data volumes or expanding to new channels, it won’t support the company’s evolving needs.

    6. What role do hyper-personalization and timely communication play in a successful customer engagement strategy? How do you ensure they’re built into the technology roadmap?
    Hyper-personalization and timely communication are essential components of a successful customer engagement strategy because they create meaningful, relevant, and impactful experiences that deepen the relationship with customers, enhance loyalty, and drive business outcomes.
    Hyper-personalization leverages data to deliver tailored content that resonates with each individual based on their preferences, behavior, or past interactions, and timely communication ensures these personalized interactions occur at the most relevant moments, which ultimately increases their impact.
    Customers are more likely to engage with messages that feel relevant and align with their needs, and real-time triggers such as cart abandonment or post-purchase upsells capitalize on moments when customers are most likely to convert.

    By embedding these capabilities into the roadmap through data integration, AI-driven insights, automation, and continuous optimization, we can deliver impactful, relevant, and timely experiences that foster deeper customer relationships and drive long-term success.

    7. What’s your approach to breaking down the customer engagement technology roadmap into manageable phases? How do you prioritize the initiatives?
    To create a manageable roadmap, we need to divide it into distinct phases, starting with building the foundation by addressing data cleanup, system integrations, and establishing metrics, which lays the groundwork for success.
    Next, we can focus on early wins and quick impact by launching behavior-based campaigns, automating workflows, and improving personalization to drive immediate value.
    Then we can move to optimization and expansion, incorporating predictive analytics, cross-channel orchestration, and refined attribution models to enhance our capabilities.
    Finally, prioritize innovation and scalability, leveraging AI/ML for hyper-personalization, scaling campaigns to new markets, and ensuring the system is equipped for future growth.
    By starting with foundational projects, delivering quick wins, and building towards scalable innovation, we can drive measurable outcomes while maintaining our agility to adapt to evolving needs.

    In terms of prioritizing initiatives effectively, I would focus on projects that deliver the greatest impact on business goals, on customer experience and ROI, while we consider feasibility, urgency, and resource availability.

    In the past, I’ve used frameworks like Impact Effort Matrix to identify the high-impact, low-effort initiatives and ensure that the most critical projects are addressed first.
    8. How do you ensure cross-functional alignment around this roadmap? What processes have worked best for you?
    Ensuring cross-functional alignment requires clear communication, collaborative planning, and shared accountability.
    We need to establish a shared understanding of the roadmap’s purpose and how it ties to the company’s overall goals by clearly articulating the “why” behind the roadmap and how each team can contribute to its success.
    To foster buy-in and ensure the roadmap reflects diverse perspectives and needs, we need to involve all stakeholders early on during the roadmap development and clearly outline each team’s role in executing the roadmap to ensure accountability across the different teams.

    To keep teams informed and aligned, we use meetings such as roadmap kickoff sessions and regular check-ins to share updates, address challenges collaboratively, and celebrate milestones together.

    9. If you were to outline a simple framework for marketers to follow when building a customer engagement technology roadmap, what would it look like?
    A simple framework for marketers to follow when building the roadmap can be summarized in five clear steps: Plan, Audit, Prioritize, Execute, and Refine.
    In one word: PAPER. Here’s how it breaks down.

    Plan: We lay the groundwork for the roadmap by defining the CRM strategy and aligning it with the business goals.
    Audit: We evaluate the current state of our CRM capabilities. We conduct a comprehensive assessment of our tools, our data, the processes, and team workflows to identify any potential gaps.
    Prioritize: initiatives based on impact, feasibility, and ROI potential.
    Execute: by implementing the roadmap in manageable phases.
    Refine: by continuously improving CRM performance and refining the roadmap.

    So the PAPER framework — Plan, Audit, Prioritize, Execute, and Refine — provides a structured, iterative approach allowing marketers to create a scalable and impactful customer engagement strategy.

    10. What are the most common challenges marketers face in creating or executing a customer engagement strategy, and how can they address these effectively?
    The most critical is when the customer data is siloed across different tools and platforms, making it very difficult to get a unified view of the customer. This limits the ability to deliver personalized and consistent experiences.

    The solution is to invest in tools that can centralize data from all touchpoints and ensure seamless integration between different platforms to create a single source of truth.

    Another challenge is the lack of clear metrics and ROI measurement and the inability to connect engagement efforts to tangible business outcomes, making it very hard to justify investment or optimize strategies.
    The solution for that is to define clear KPIs at the outset and use attribution models to link customer interactions to revenue and other key outcomes.
    Overcoming internal silos is another challenge where there is misalignment between teams, which can lead to inconsistent messaging and delayed execution.
    A solution to this is to foster cross-functional collaboration through shared goals, regular communication, and joint planning sessions.
    Besides these, other challenges marketers can face are delivering personalization at scale, keeping up with changing customer expectations, resource and budget constraints, resistance to change, and others.
    While creating and executing a customer engagement strategy can be challenging, these obstacles can be addressed through strategic planning, leveraging the right tools, fostering collaboration, and staying adaptable to customer needs and industry trends.

    By tackling these challenges proactively, marketers can deliver impactful customer-centric strategies that drive long-term success.

    11. What are the top takeaways or lessons that you’ve learned from building customer engagement technology roadmaps that others should keep in mind?
    I would say one of the most important takeaways is to ensure that the roadmap directly supports the company’s broader objectives.
    Whether the focus is on retention, customer lifetime value, or revenue growth, the roadmap must bridge the gap between high-level business goals and actionable initiatives.

    Another important lesson: The roadmap is only as effective as the data and systems it’s built upon.

    I’ve learned the importance of prioritizing foundational elements like data cleanup, integrations, and governance before tackling advanced initiatives like personalization or predictive analytics. Skipping this step can lead to inefficiencies or missed opportunities later on.
    A Customer Engagement Roadmap is a strategic tool that evolves alongside the business and its customers.

    So by aligning with business goals, building a solid foundation, focusing on impact, fostering collaboration, and remaining adaptable, you can create a roadmap that delivers measurable results and meaningful customer experiences.

     

     
    This interview Q&A was hosted with Mirela Cialai, Director of CRM & MarTech at Equinox, for Chapter 7 of The Customer Engagement Book: Adapt or Die.
    Download the PDF or request a physical copy of the book here.
    The post Mirela Cialai Q&A: Customer Engagement Book Interview appeared first on MoEngage.
    #mirela #cialai #qampampa #customer #engagement
    Mirela Cialai Q&A: Customer Engagement Book Interview
    Reading Time: 9 minutes In the ever-evolving landscape of customer engagement, staying ahead of the curve is not just advantageous, it’s essential. That’s why, for Chapter 7 of “The Customer Engagement Book: Adapt or Die,” we sat down with Mirela Cialai, a seasoned expert in CRM and Martech strategies at brands like Equinox. Mirela brings a wealth of knowledge in aligning technology roadmaps with business goals, shifting organizational focuses from acquisition to retention, and leveraging hyper-personalization to drive success. In this interview, Mirela dives deep into building robust customer engagement technology roadmaps. She unveils the “PAPER” framework—Plan, Audit, Prioritize, Execute, Refine—a simple yet effective strategy for marketers. You’ll gain insights into identifying gaps in your Martech stack, ensuring data accuracy, and prioritizing initiatives that deliver the greatest impact and ROI. Whether you’re navigating data silos, striving for cross-functional alignment, or aiming for seamless tech integration, Mirela’s expertise provides practical solutions and actionable takeaways.   Mirela Cialai Q&A Interview 1. How do you define the vision for a customer engagement platform roadmap in alignment with the broader business goals? Can you share any examples of successful visions from your experience? Defining the vision for the roadmap in alignment with the broader business goals involves creating a strategic framework that connects the team’s objectives with the organization’s overarching mission or primary objectives. This could be revenue growth, customer retention, market expansion, or operational efficiency. We then break down these goals into actionable areas where the team can contribute, such as improving engagement, increasing lifetime value, or driving acquisition. We articulate how the team will support business goals by defining the KPIs that link CRM outcomes — the team’s outcomes — to business goals. In a previous role, the CRM team I was leading faced significant challenges due to the lack of attribution capabilities and a reliance on surface-level metrics such as open rates and click-through rates to measure performance. This approach made it difficult to quantify the impact of our efforts on broader business objectives such as revenue growth. Recognizing this gap, I worked on defining a vision for the CRM team to address these shortcomings. Our vision was to drive measurable growth through enhanced data accuracy and improved attribution capabilities, which allowed us to deliver targeted, data-driven, and personalized customer experiences. To bring this vision to life, I developed a roadmap that focused on first improving data accuracy, building our attribution capabilities, and delivering personalization at scale. By aligning the vision with these strategic priorities, we were able to demonstrate the tangible impact of our efforts on the key business goals. 2. What steps did you take to ensure data accuracy? The data team was very diligent in ensuring that our data warehouse had accurate data. So taking that as the source of truth, we started cleaning the data in all the other platforms that were integrated with our data warehouse — our CRM platform, our attribution analytics platform, etc. That’s where we started, looking at all the different integrations and ensuring that the data flows were correct and that we had all the right flows in place. And also validating and cleaning our email database — that helped, having more accurate data. 3. How do you recommend shifting organizational focus from acquisition to retention within a customer engagement strategy? Shifting an organization’s focus from acquisition to retention requires a cultural and strategic shift, emphasizing the immense value that existing customers bring to long-term growth and profitability. I would start by quantifying the value of retention, showcasing how retaining customers is significantly more cost-effective than acquiring new ones. Research consistently shows that increasing retention rates by just 5% can boost profits by at least 25 to 95%. This data helps make a compelling case to stakeholders about the importance of prioritizing retention. Next, I would link retention to core business goals by demonstrating how enhancing customer lifetime value and loyalty can directly drive revenue growth. This involves shifting the organization’s focus to retention-specific metrics such as churn rate, repeat purchase rate, and customer LTV. These metrics provide actionable insights into customer behaviors and highlight the financial impact of retention initiatives, ensuring alignment with the broader company objectives. By framing retention as a driver of sustainable growth, the organization can see it not as a competing priority, but as a complementary strategy to acquisition, ultimately leading to a more balanced and effective customer engagement strategy. 4. What are the key steps in analyzing a brand’s current Martech stack capabilities to identify gaps and opportunities for improvement? Developing a clear understanding of the Martech stack’s current state and ensuring it aligns with a brand’s strategic needs and future goals requires a structured and strategic approach. The process begins with defining what success looks like in terms of technology capabilities such as scalability, integration, automation, and data accessibility, and linking these capabilities directly to the brand’s broader business objectives. I start by doing an inventory of all tools currently in use, including their purpose, owner, and key functionalities, assessing if these tools are being used to their full potential or if there are features that remain unused, and reviewing how well tools integrate with one another and with our core systems, the data warehouse. Also, comparing the capabilities of each tool and results against industry standards and competitor practices and looking for missing functionalities such as personalization, omnichannel orchestration, or advanced analytics, and identifying overlapping tools that could be consolidated to save costs and streamline workflows. Finally, review the costs of the current tools against their impact on business outcomes and identify technologies that could reduce costs, increase efficiency, or deliver higher ROI through enhanced capabilities. Establish a regular review cycle for the Martech stack to ensure it evolves alongside the business and the technological landscape. 5. How do you evaluate whether a company’s tech stack can support innovative customer-focused campaigns, and what red flags should marketers look out for? I recommend taking a structured approach and first ensure there is seamless integration across all tools to support a unified customer view and data sharing across the different channels. Determine if the stack can handle increasing data volumes, larger audiences, and additional channels as the campaigns grow, and check if it supports dynamic content, behavior-based triggers, and advanced segmentation and can process and act on data in real time through emerging technologies like AI/ML predictive analytics to enable marketers to launch responsive and timely campaigns. Most importantly, we need to ensure that the stack offers robust reporting tools that provide actionable insights, allowing teams to track performance and optimize campaigns. Some of the red flags are: data silos where customer data is fragmented across platforms and not easily accessible or integrated, inability to process or respond to customer behavior in real time, a reliance on manual intervention for tasks like segmentation, data extraction, campaign deployment, and poor scalability. If the stack struggles with growing data volumes or expanding to new channels, it won’t support the company’s evolving needs. 6. What role do hyper-personalization and timely communication play in a successful customer engagement strategy? How do you ensure they’re built into the technology roadmap? Hyper-personalization and timely communication are essential components of a successful customer engagement strategy because they create meaningful, relevant, and impactful experiences that deepen the relationship with customers, enhance loyalty, and drive business outcomes. Hyper-personalization leverages data to deliver tailored content that resonates with each individual based on their preferences, behavior, or past interactions, and timely communication ensures these personalized interactions occur at the most relevant moments, which ultimately increases their impact. Customers are more likely to engage with messages that feel relevant and align with their needs, and real-time triggers such as cart abandonment or post-purchase upsells capitalize on moments when customers are most likely to convert. By embedding these capabilities into the roadmap through data integration, AI-driven insights, automation, and continuous optimization, we can deliver impactful, relevant, and timely experiences that foster deeper customer relationships and drive long-term success. 7. What’s your approach to breaking down the customer engagement technology roadmap into manageable phases? How do you prioritize the initiatives? To create a manageable roadmap, we need to divide it into distinct phases, starting with building the foundation by addressing data cleanup, system integrations, and establishing metrics, which lays the groundwork for success. Next, we can focus on early wins and quick impact by launching behavior-based campaigns, automating workflows, and improving personalization to drive immediate value. Then we can move to optimization and expansion, incorporating predictive analytics, cross-channel orchestration, and refined attribution models to enhance our capabilities. Finally, prioritize innovation and scalability, leveraging AI/ML for hyper-personalization, scaling campaigns to new markets, and ensuring the system is equipped for future growth. By starting with foundational projects, delivering quick wins, and building towards scalable innovation, we can drive measurable outcomes while maintaining our agility to adapt to evolving needs. In terms of prioritizing initiatives effectively, I would focus on projects that deliver the greatest impact on business goals, on customer experience and ROI, while we consider feasibility, urgency, and resource availability. In the past, I’ve used frameworks like Impact Effort Matrix to identify the high-impact, low-effort initiatives and ensure that the most critical projects are addressed first. 8. How do you ensure cross-functional alignment around this roadmap? What processes have worked best for you? Ensuring cross-functional alignment requires clear communication, collaborative planning, and shared accountability. We need to establish a shared understanding of the roadmap’s purpose and how it ties to the company’s overall goals by clearly articulating the “why” behind the roadmap and how each team can contribute to its success. To foster buy-in and ensure the roadmap reflects diverse perspectives and needs, we need to involve all stakeholders early on during the roadmap development and clearly outline each team’s role in executing the roadmap to ensure accountability across the different teams. To keep teams informed and aligned, we use meetings such as roadmap kickoff sessions and regular check-ins to share updates, address challenges collaboratively, and celebrate milestones together. 9. If you were to outline a simple framework for marketers to follow when building a customer engagement technology roadmap, what would it look like? A simple framework for marketers to follow when building the roadmap can be summarized in five clear steps: Plan, Audit, Prioritize, Execute, and Refine. In one word: PAPER. Here’s how it breaks down. Plan: We lay the groundwork for the roadmap by defining the CRM strategy and aligning it with the business goals. Audit: We evaluate the current state of our CRM capabilities. We conduct a comprehensive assessment of our tools, our data, the processes, and team workflows to identify any potential gaps. Prioritize: initiatives based on impact, feasibility, and ROI potential. Execute: by implementing the roadmap in manageable phases. Refine: by continuously improving CRM performance and refining the roadmap. So the PAPER framework — Plan, Audit, Prioritize, Execute, and Refine — provides a structured, iterative approach allowing marketers to create a scalable and impactful customer engagement strategy. 10. What are the most common challenges marketers face in creating or executing a customer engagement strategy, and how can they address these effectively? The most critical is when the customer data is siloed across different tools and platforms, making it very difficult to get a unified view of the customer. This limits the ability to deliver personalized and consistent experiences. The solution is to invest in tools that can centralize data from all touchpoints and ensure seamless integration between different platforms to create a single source of truth. Another challenge is the lack of clear metrics and ROI measurement and the inability to connect engagement efforts to tangible business outcomes, making it very hard to justify investment or optimize strategies. The solution for that is to define clear KPIs at the outset and use attribution models to link customer interactions to revenue and other key outcomes. Overcoming internal silos is another challenge where there is misalignment between teams, which can lead to inconsistent messaging and delayed execution. A solution to this is to foster cross-functional collaboration through shared goals, regular communication, and joint planning sessions. Besides these, other challenges marketers can face are delivering personalization at scale, keeping up with changing customer expectations, resource and budget constraints, resistance to change, and others. While creating and executing a customer engagement strategy can be challenging, these obstacles can be addressed through strategic planning, leveraging the right tools, fostering collaboration, and staying adaptable to customer needs and industry trends. By tackling these challenges proactively, marketers can deliver impactful customer-centric strategies that drive long-term success. 11. What are the top takeaways or lessons that you’ve learned from building customer engagement technology roadmaps that others should keep in mind? I would say one of the most important takeaways is to ensure that the roadmap directly supports the company’s broader objectives. Whether the focus is on retention, customer lifetime value, or revenue growth, the roadmap must bridge the gap between high-level business goals and actionable initiatives. Another important lesson: The roadmap is only as effective as the data and systems it’s built upon. I’ve learned the importance of prioritizing foundational elements like data cleanup, integrations, and governance before tackling advanced initiatives like personalization or predictive analytics. Skipping this step can lead to inefficiencies or missed opportunities later on. A Customer Engagement Roadmap is a strategic tool that evolves alongside the business and its customers. So by aligning with business goals, building a solid foundation, focusing on impact, fostering collaboration, and remaining adaptable, you can create a roadmap that delivers measurable results and meaningful customer experiences.     This interview Q&A was hosted with Mirela Cialai, Director of CRM & MarTech at Equinox, for Chapter 7 of The Customer Engagement Book: Adapt or Die. Download the PDF or request a physical copy of the book here. The post Mirela Cialai Q&A: Customer Engagement Book Interview appeared first on MoEngage. #mirela #cialai #qampampa #customer #engagement
    WWW.MOENGAGE.COM
    Mirela Cialai Q&A: Customer Engagement Book Interview
    Reading Time: 9 minutes In the ever-evolving landscape of customer engagement, staying ahead of the curve is not just advantageous, it’s essential. That’s why, for Chapter 7 of “The Customer Engagement Book: Adapt or Die,” we sat down with Mirela Cialai, a seasoned expert in CRM and Martech strategies at brands like Equinox. Mirela brings a wealth of knowledge in aligning technology roadmaps with business goals, shifting organizational focuses from acquisition to retention, and leveraging hyper-personalization to drive success. In this interview, Mirela dives deep into building robust customer engagement technology roadmaps. She unveils the “PAPER” framework—Plan, Audit, Prioritize, Execute, Refine—a simple yet effective strategy for marketers. You’ll gain insights into identifying gaps in your Martech stack, ensuring data accuracy, and prioritizing initiatives that deliver the greatest impact and ROI. Whether you’re navigating data silos, striving for cross-functional alignment, or aiming for seamless tech integration, Mirela’s expertise provides practical solutions and actionable takeaways.   Mirela Cialai Q&A Interview 1. How do you define the vision for a customer engagement platform roadmap in alignment with the broader business goals? Can you share any examples of successful visions from your experience? Defining the vision for the roadmap in alignment with the broader business goals involves creating a strategic framework that connects the team’s objectives with the organization’s overarching mission or primary objectives. This could be revenue growth, customer retention, market expansion, or operational efficiency. We then break down these goals into actionable areas where the team can contribute, such as improving engagement, increasing lifetime value, or driving acquisition. We articulate how the team will support business goals by defining the KPIs that link CRM outcomes — the team’s outcomes — to business goals. In a previous role, the CRM team I was leading faced significant challenges due to the lack of attribution capabilities and a reliance on surface-level metrics such as open rates and click-through rates to measure performance. This approach made it difficult to quantify the impact of our efforts on broader business objectives such as revenue growth. Recognizing this gap, I worked on defining a vision for the CRM team to address these shortcomings. Our vision was to drive measurable growth through enhanced data accuracy and improved attribution capabilities, which allowed us to deliver targeted, data-driven, and personalized customer experiences. To bring this vision to life, I developed a roadmap that focused on first improving data accuracy, building our attribution capabilities, and delivering personalization at scale. By aligning the vision with these strategic priorities, we were able to demonstrate the tangible impact of our efforts on the key business goals. 2. What steps did you take to ensure data accuracy? The data team was very diligent in ensuring that our data warehouse had accurate data. So taking that as the source of truth, we started cleaning the data in all the other platforms that were integrated with our data warehouse — our CRM platform, our attribution analytics platform, etc. That’s where we started, looking at all the different integrations and ensuring that the data flows were correct and that we had all the right flows in place. And also validating and cleaning our email database — that helped, having more accurate data. 3. How do you recommend shifting organizational focus from acquisition to retention within a customer engagement strategy? Shifting an organization’s focus from acquisition to retention requires a cultural and strategic shift, emphasizing the immense value that existing customers bring to long-term growth and profitability. I would start by quantifying the value of retention, showcasing how retaining customers is significantly more cost-effective than acquiring new ones. Research consistently shows that increasing retention rates by just 5% can boost profits by at least 25 to 95%. This data helps make a compelling case to stakeholders about the importance of prioritizing retention. Next, I would link retention to core business goals by demonstrating how enhancing customer lifetime value and loyalty can directly drive revenue growth. This involves shifting the organization’s focus to retention-specific metrics such as churn rate, repeat purchase rate, and customer LTV. These metrics provide actionable insights into customer behaviors and highlight the financial impact of retention initiatives, ensuring alignment with the broader company objectives. By framing retention as a driver of sustainable growth, the organization can see it not as a competing priority, but as a complementary strategy to acquisition, ultimately leading to a more balanced and effective customer engagement strategy. 4. What are the key steps in analyzing a brand’s current Martech stack capabilities to identify gaps and opportunities for improvement? Developing a clear understanding of the Martech stack’s current state and ensuring it aligns with a brand’s strategic needs and future goals requires a structured and strategic approach. The process begins with defining what success looks like in terms of technology capabilities such as scalability, integration, automation, and data accessibility, and linking these capabilities directly to the brand’s broader business objectives. I start by doing an inventory of all tools currently in use, including their purpose, owner, and key functionalities, assessing if these tools are being used to their full potential or if there are features that remain unused, and reviewing how well tools integrate with one another and with our core systems, the data warehouse. Also, comparing the capabilities of each tool and results against industry standards and competitor practices and looking for missing functionalities such as personalization, omnichannel orchestration, or advanced analytics, and identifying overlapping tools that could be consolidated to save costs and streamline workflows. Finally, review the costs of the current tools against their impact on business outcomes and identify technologies that could reduce costs, increase efficiency, or deliver higher ROI through enhanced capabilities. Establish a regular review cycle for the Martech stack to ensure it evolves alongside the business and the technological landscape. 5. How do you evaluate whether a company’s tech stack can support innovative customer-focused campaigns, and what red flags should marketers look out for? I recommend taking a structured approach and first ensure there is seamless integration across all tools to support a unified customer view and data sharing across the different channels. Determine if the stack can handle increasing data volumes, larger audiences, and additional channels as the campaigns grow, and check if it supports dynamic content, behavior-based triggers, and advanced segmentation and can process and act on data in real time through emerging technologies like AI/ML predictive analytics to enable marketers to launch responsive and timely campaigns. Most importantly, we need to ensure that the stack offers robust reporting tools that provide actionable insights, allowing teams to track performance and optimize campaigns. Some of the red flags are: data silos where customer data is fragmented across platforms and not easily accessible or integrated, inability to process or respond to customer behavior in real time, a reliance on manual intervention for tasks like segmentation, data extraction, campaign deployment, and poor scalability. If the stack struggles with growing data volumes or expanding to new channels, it won’t support the company’s evolving needs. 6. What role do hyper-personalization and timely communication play in a successful customer engagement strategy? How do you ensure they’re built into the technology roadmap? Hyper-personalization and timely communication are essential components of a successful customer engagement strategy because they create meaningful, relevant, and impactful experiences that deepen the relationship with customers, enhance loyalty, and drive business outcomes. Hyper-personalization leverages data to deliver tailored content that resonates with each individual based on their preferences, behavior, or past interactions, and timely communication ensures these personalized interactions occur at the most relevant moments, which ultimately increases their impact. Customers are more likely to engage with messages that feel relevant and align with their needs, and real-time triggers such as cart abandonment or post-purchase upsells capitalize on moments when customers are most likely to convert. By embedding these capabilities into the roadmap through data integration, AI-driven insights, automation, and continuous optimization, we can deliver impactful, relevant, and timely experiences that foster deeper customer relationships and drive long-term success. 7. What’s your approach to breaking down the customer engagement technology roadmap into manageable phases? How do you prioritize the initiatives? To create a manageable roadmap, we need to divide it into distinct phases, starting with building the foundation by addressing data cleanup, system integrations, and establishing metrics, which lays the groundwork for success. Next, we can focus on early wins and quick impact by launching behavior-based campaigns, automating workflows, and improving personalization to drive immediate value. Then we can move to optimization and expansion, incorporating predictive analytics, cross-channel orchestration, and refined attribution models to enhance our capabilities. Finally, prioritize innovation and scalability, leveraging AI/ML for hyper-personalization, scaling campaigns to new markets, and ensuring the system is equipped for future growth. By starting with foundational projects, delivering quick wins, and building towards scalable innovation, we can drive measurable outcomes while maintaining our agility to adapt to evolving needs. In terms of prioritizing initiatives effectively, I would focus on projects that deliver the greatest impact on business goals, on customer experience and ROI, while we consider feasibility, urgency, and resource availability. In the past, I’ve used frameworks like Impact Effort Matrix to identify the high-impact, low-effort initiatives and ensure that the most critical projects are addressed first. 8. How do you ensure cross-functional alignment around this roadmap? What processes have worked best for you? Ensuring cross-functional alignment requires clear communication, collaborative planning, and shared accountability. We need to establish a shared understanding of the roadmap’s purpose and how it ties to the company’s overall goals by clearly articulating the “why” behind the roadmap and how each team can contribute to its success. To foster buy-in and ensure the roadmap reflects diverse perspectives and needs, we need to involve all stakeholders early on during the roadmap development and clearly outline each team’s role in executing the roadmap to ensure accountability across the different teams. To keep teams informed and aligned, we use meetings such as roadmap kickoff sessions and regular check-ins to share updates, address challenges collaboratively, and celebrate milestones together. 9. If you were to outline a simple framework for marketers to follow when building a customer engagement technology roadmap, what would it look like? A simple framework for marketers to follow when building the roadmap can be summarized in five clear steps: Plan, Audit, Prioritize, Execute, and Refine. In one word: PAPER. Here’s how it breaks down. Plan: We lay the groundwork for the roadmap by defining the CRM strategy and aligning it with the business goals. Audit: We evaluate the current state of our CRM capabilities. We conduct a comprehensive assessment of our tools, our data, the processes, and team workflows to identify any potential gaps. Prioritize: initiatives based on impact, feasibility, and ROI potential. Execute: by implementing the roadmap in manageable phases. Refine: by continuously improving CRM performance and refining the roadmap. So the PAPER framework — Plan, Audit, Prioritize, Execute, and Refine — provides a structured, iterative approach allowing marketers to create a scalable and impactful customer engagement strategy. 10. What are the most common challenges marketers face in creating or executing a customer engagement strategy, and how can they address these effectively? The most critical is when the customer data is siloed across different tools and platforms, making it very difficult to get a unified view of the customer. This limits the ability to deliver personalized and consistent experiences. The solution is to invest in tools that can centralize data from all touchpoints and ensure seamless integration between different platforms to create a single source of truth. Another challenge is the lack of clear metrics and ROI measurement and the inability to connect engagement efforts to tangible business outcomes, making it very hard to justify investment or optimize strategies. The solution for that is to define clear KPIs at the outset and use attribution models to link customer interactions to revenue and other key outcomes. Overcoming internal silos is another challenge where there is misalignment between teams, which can lead to inconsistent messaging and delayed execution. A solution to this is to foster cross-functional collaboration through shared goals, regular communication, and joint planning sessions. Besides these, other challenges marketers can face are delivering personalization at scale, keeping up with changing customer expectations, resource and budget constraints, resistance to change, and others. While creating and executing a customer engagement strategy can be challenging, these obstacles can be addressed through strategic planning, leveraging the right tools, fostering collaboration, and staying adaptable to customer needs and industry trends. By tackling these challenges proactively, marketers can deliver impactful customer-centric strategies that drive long-term success. 11. What are the top takeaways or lessons that you’ve learned from building customer engagement technology roadmaps that others should keep in mind? I would say one of the most important takeaways is to ensure that the roadmap directly supports the company’s broader objectives. Whether the focus is on retention, customer lifetime value, or revenue growth, the roadmap must bridge the gap between high-level business goals and actionable initiatives. Another important lesson: The roadmap is only as effective as the data and systems it’s built upon. I’ve learned the importance of prioritizing foundational elements like data cleanup, integrations, and governance before tackling advanced initiatives like personalization or predictive analytics. Skipping this step can lead to inefficiencies or missed opportunities later on. A Customer Engagement Roadmap is a strategic tool that evolves alongside the business and its customers. So by aligning with business goals, building a solid foundation, focusing on impact, fostering collaboration, and remaining adaptable, you can create a roadmap that delivers measurable results and meaningful customer experiences.     This interview Q&A was hosted with Mirela Cialai, Director of CRM & MarTech at Equinox, for Chapter 7 of The Customer Engagement Book: Adapt or Die. Download the PDF or request a physical copy of the book here. The post Mirela Cialai Q&A: Customer Engagement Book Interview appeared first on MoEngage.
    0 Reacties 0 aandelen
  • New Zealand’s Email Security Requirements for Government Organizations: What You Need to Know

    The Secure Government EmailCommon Implementation Framework
    New Zealand’s government is introducing a comprehensive email security framework designed to protect official communications from phishing and domain spoofing. This new framework, which will be mandatory for all government agencies by October 2025, establishes clear technical standards to enhance email security and retire the outdated SEEMail service. 
    Key Takeaways

    All NZ government agencies must comply with new email security requirements by October 2025.
    The new framework strengthens trust and security in government communications by preventing spoofing and phishing.
    The framework mandates TLS 1.2+, SPF, DKIM, DMARC with p=reject, MTA-STS, and DLP controls.
    EasyDMARC simplifies compliance with our guided setup, monitoring, and automated reporting.

    Start a Free Trial

    What is the Secure Government Email Common Implementation Framework?
    The Secure Government EmailCommon Implementation Framework is a new government-led initiative in New Zealand designed to standardize email security across all government agencies. Its main goal is to secure external email communication, reduce domain spoofing in phishing attacks, and replace the legacy SEEMail service.
    Why is New Zealand Implementing New Government Email Security Standards?
    The framework was developed by New Zealand’s Department of Internal Affairsas part of its role in managing ICT Common Capabilities. It leverages modern email security controls via the Domain Name Systemto enable the retirement of the legacy SEEMail service and provide:

    Encryption for transmission security
    Digital signing for message integrity
    Basic non-repudiationDomain spoofing protection

    These improvements apply to all emails, not just those routed through SEEMail, offering broader protection across agency communications.
    What Email Security Technologies Are Required by the New NZ SGE Framework?
    The SGE Framework outlines the following key technologies that agencies must implement:

    TLS 1.2 or higher with implicit TLS enforced
    TLS-RPTSPFDKIMDMARCwith reporting
    MTA-STSData Loss Prevention controls

    These technologies work together to ensure encrypted email transmission, validate sender identity, prevent unauthorized use of domains, and reduce the risk of sensitive data leaks.

    Get in touch

    When Do NZ Government Agencies Need to Comply with this Framework?
    All New Zealand government agencies are expected to fully implement the Secure Government EmailCommon Implementation Framework by October 2025. Agencies should begin their planning and deployment now to ensure full compliance by the deadline.
    The All of Government Secure Email Common Implementation Framework v1.0
    What are the Mandated Requirements for Domains?
    Below are the exact requirements for all email-enabled domains under the new framework.
    ControlExact RequirementTLSMinimum TLS 1.2. TLS 1.1, 1.0, SSL, or clear-text not permitted.TLS-RPTAll email-sending domains must have TLS reporting enabled.SPFMust exist and end with -all.DKIMAll outbound email from every sending service must be DKIM-signed at the final hop.DMARCPolicy of p=reject on all email-enabled domains. adkim=s is recommended when not bulk-sending.MTA-STSEnabled and set to enforce.Implicit TLSMust be configured and enforced for every connection.Data Loss PreventionEnforce in line with the New Zealand Information Security Manualand Protective Security Requirements.
    Compliance Monitoring and Reporting
    The All of Government Service Deliveryteam will be monitoring compliance with the framework. Monitoring will initially cover SPF, DMARC, and MTA-STS settings and will be expanded to include DKIM. Changes to these settings will be monitored, enabling reporting on email security compliance across all government agencies. Ongoing monitoring will highlight changes to domains, ensure new domains are set up with security in place, and monitor the implementation of future email security technologies. 
    Should compliance changes occur, such as an agency’s SPF record being changed from -all to ~all, this will be captured so that the AoGSD Security Team can investigate. They will then communicate directly with the agency to determine if an issue exists or if an error has occurred, reviewing each case individually.
    Deployment Checklist for NZ Government Compliance

    Enforce TLS 1.2 minimum, implicit TLS, MTA-STS & TLS-RPT
    SPF with -all
    DKIM on all outbound email
    DMARC p=reject 
    adkim=s where suitable
    For non-email/parked domains: SPF -all, empty DKIM, DMARC reject strict
    Compliance dashboard
    Inbound DMARC evaluation enforced
    DLP aligned with NZISM

    Start a Free Trial

    How EasyDMARC Can Help Government Agencies Comply
    EasyDMARC provides a comprehensive email security solution that simplifies the deployment and ongoing management of DNS-based email security protocols like SPF, DKIM, and DMARC with reporting. Our platform offers automated checks, real-time monitoring, and a guided setup to help government organizations quickly reach compliance.
    1. TLS-RPT / MTA-STS audit
    EasyDMARC enables you to enable the Managed MTA-STS and TLS-RPT option with a single click. We provide the required DNS records and continuously monitor them for issues, delivering reports on TLS negotiation problems. This helps agencies ensure secure email transmission and quickly detect delivery or encryption failures.

    Note: In this screenshot, you can see how to deploy MTA-STS and TLS Reporting by adding just three CNAME records provided by EasyDMARC. It’s recommended to start in “testing” mode, evaluate the TLS-RPT reports, and then gradually switch your MTA-STS policy to “enforce”. The process is simple and takes just a few clicks.

    As shown above, EasyDMARC parses incoming TLS reports into a centralized dashboard, giving you clear visibility into delivery and encryption issues across all sending sources.
    2. SPF with “-all”In the EasyDARC platform, you can run the SPF Record Generator to create a compliant record. Publish your v=spf1 record with “-all” to enforce a hard fail for unauthorized senders and prevent spoofed emails from passing SPF checks. This strengthens your domain’s protection against impersonation.

    Note: It is highly recommended to start adjusting your SPF record only after you begin receiving DMARC reports and identifying your legitimate email sources. As we’ll explain in more detail below, both SPF and DKIM should be adjusted after you gain visibility through reports.
    Making changes without proper visibility can lead to false positives, misconfigurations, and potential loss of legitimate emails. That’s why the first step should always be setting DMARC to p=none, receiving reports, analyzing them, and then gradually fixing any SPF or DKIM issues.
    3. DKIM on all outbound email
    DKIM must be configured for all email sources sending emails on behalf of your domain. This is critical, as DKIM plays a bigger role than SPF when it comes to building domain reputation, surviving auto-forwarding, mailing lists, and other edge cases.
    As mentioned above, DMARC reports provide visibility into your email sources, allowing you to implement DKIM accordingly. If you’re using third-party services like Google Workspace, Microsoft 365, or Mimecast, you’ll need to retrieve the public DKIM key from your provider’s admin interface.
    EasyDMARC maintains a backend directory of over 1,400 email sources. We also give you detailed guidance on how to configure SPF and DKIM correctly for major ESPs. 
    Note: At the end of this article, you’ll find configuration links for well-known ESPs like Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid – helping you avoid common misconfigurations and get aligned with SGE requirements.
    If you’re using a dedicated MTA, DKIM must be implemented manually. EasyDMARC’s DKIM Record Generator lets you generate both public and private keys for your server. The private key is stored on your MTA, while the public key must be published in your DNS.

    4. DMARC p=reject rollout
    As mentioned in previous points, DMARC reporting is the first and most important step on your DMARC enforcement journey. Always start with a p=none policy and configure RUA reports to be sent to EasyDMARC. Use the report insights to identify and fix SPF and DKIM alignment issues, then gradually move to p=quarantine and finally p=reject once all legitimate email sources have been authenticated. 
    This phased approach ensures full protection against domain spoofing without risking legitimate email delivery.

    5. adkim Strict Alignment Check
    This strict alignment check is not always applicable, especially if you’re using third-party bulk ESPs, such as Sendgrid, that require you to set DKIM on a subdomain level. You can set adkim=s in your DMARC TXT record, or simply enable strict mode in EasyDMARC’s Managed DMARC settings. This ensures that only emails with a DKIM signature that exactly match your domain pass alignment, adding an extra layer of protection against domain spoofing. But only do this if you are NOT a bulk sender.

    6. Securing Non-Email Enabled Domains
    The purpose of deploying email security to non-email-enabled domains, or parked domains, is to prevent messages being spoofed from that domain. This requirement remains even if the root-level domain has SP=reject set within its DMARC record.
    Under this new framework, you must bulk import and mark parked domains as “Parked.” Crucially, this requires adjusting SPF settings to an empty record, setting DMARC to p=reject, and ensuring an empty DKIM record is in place: • SPF record: “v=spf1 -all”.
    • Wildcard DKIM record with empty public key.• DMARC record: “v=DMARC1;p=reject;adkim=s;aspf=s;rua=mailto:…”.
    EasyDMARC allows you to add and label parked domains for free. This is important because it helps you monitor any activity from these domains and ensure they remain protected with a strict DMARC policy of p=reject.
    7. Compliance Dashboard
    Use EasyDMARC’s Domain Scanner to assess the security posture of each domain with a clear compliance score and risk level. The dashboard highlights configuration gaps and guides remediation steps, helping government agencies stay on track toward full compliance with the SGE Framework.

    8. Inbound DMARC Evaluation Enforced
    You don’t need to apply any changes if you’re using Google Workspace, Microsoft 365, or other major mailbox providers. Most of them already enforce DMARC evaluation on incoming emails.
    However, some legacy Microsoft 365 setups may still quarantine emails that fail DMARC checks, even when the sending domain has a p=reject policy, instead of rejecting them. This behavior can be adjusted directly from your Microsoft Defender portal. about this in our step-by-step guide on how to set up SPF, DKIM, and DMARC from Microsoft Defender.
    If you’re using a third-party mail provider that doesn’t enforce having a DMARC policy for incoming emails, which is rare, you’ll need to contact their support to request a configuration change.
    9. Data Loss Prevention Aligned with NZISM
    The New Zealand Information Security Manualis the New Zealand Government’s manual on information assurance and information systems security. It includes guidance on data loss prevention, which must be followed to be aligned with the SEG.
    Need Help Setting up SPF and DKIM for your Email Provider?
    Setting up SPF and DKIM for different ESPs often requires specific configurations. Some providers require you to publish SPF and DKIM on a subdomain, while others only require DKIM, or have different formatting rules. We’ve simplified all these steps to help you avoid misconfigurations that could delay your DMARC enforcement, or worse, block legitimate emails from reaching your recipients.
    Below you’ll find comprehensive setup guides for Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid. You can also explore our full blog section that covers setup instructions for many other well-known ESPs.
    Remember, all this information is reflected in your DMARC aggregate reports. These reports give you live visibility into your outgoing email ecosystem, helping you analyze and fix any issues specific to a given provider.
    Here are our step-by-step guides for the most common platforms:

    Google Workspace

    Microsoft 365

    These guides will help ensure your DNS records are configured correctly as part of the Secure Government EmailFramework rollout.
    Meet New Government Email Security Standards With EasyDMARC
    New Zealand’s SEG Framework sets a clear path for government agencies to enhance their email security by October 2025. With EasyDMARC, you can meet these technical requirements efficiently and with confidence. From protocol setup to continuous monitoring and compliance tracking, EasyDMARC streamlines the entire process, ensuring strong protection against spoofing, phishing, and data loss while simplifying your transition from SEEMail.
    #new #zealands #email #security #requirements
    New Zealand’s Email Security Requirements for Government Organizations: What You Need to Know
    The Secure Government EmailCommon Implementation Framework New Zealand’s government is introducing a comprehensive email security framework designed to protect official communications from phishing and domain spoofing. This new framework, which will be mandatory for all government agencies by October 2025, establishes clear technical standards to enhance email security and retire the outdated SEEMail service.  Key Takeaways All NZ government agencies must comply with new email security requirements by October 2025. The new framework strengthens trust and security in government communications by preventing spoofing and phishing. The framework mandates TLS 1.2+, SPF, DKIM, DMARC with p=reject, MTA-STS, and DLP controls. EasyDMARC simplifies compliance with our guided setup, monitoring, and automated reporting. Start a Free Trial What is the Secure Government Email Common Implementation Framework? The Secure Government EmailCommon Implementation Framework is a new government-led initiative in New Zealand designed to standardize email security across all government agencies. Its main goal is to secure external email communication, reduce domain spoofing in phishing attacks, and replace the legacy SEEMail service. Why is New Zealand Implementing New Government Email Security Standards? The framework was developed by New Zealand’s Department of Internal Affairsas part of its role in managing ICT Common Capabilities. It leverages modern email security controls via the Domain Name Systemto enable the retirement of the legacy SEEMail service and provide: Encryption for transmission security Digital signing for message integrity Basic non-repudiationDomain spoofing protection These improvements apply to all emails, not just those routed through SEEMail, offering broader protection across agency communications. What Email Security Technologies Are Required by the New NZ SGE Framework? The SGE Framework outlines the following key technologies that agencies must implement: TLS 1.2 or higher with implicit TLS enforced TLS-RPTSPFDKIMDMARCwith reporting MTA-STSData Loss Prevention controls These technologies work together to ensure encrypted email transmission, validate sender identity, prevent unauthorized use of domains, and reduce the risk of sensitive data leaks. Get in touch When Do NZ Government Agencies Need to Comply with this Framework? All New Zealand government agencies are expected to fully implement the Secure Government EmailCommon Implementation Framework by October 2025. Agencies should begin their planning and deployment now to ensure full compliance by the deadline. The All of Government Secure Email Common Implementation Framework v1.0 What are the Mandated Requirements for Domains? Below are the exact requirements for all email-enabled domains under the new framework. ControlExact RequirementTLSMinimum TLS 1.2. TLS 1.1, 1.0, SSL, or clear-text not permitted.TLS-RPTAll email-sending domains must have TLS reporting enabled.SPFMust exist and end with -all.DKIMAll outbound email from every sending service must be DKIM-signed at the final hop.DMARCPolicy of p=reject on all email-enabled domains. adkim=s is recommended when not bulk-sending.MTA-STSEnabled and set to enforce.Implicit TLSMust be configured and enforced for every connection.Data Loss PreventionEnforce in line with the New Zealand Information Security Manualand Protective Security Requirements. Compliance Monitoring and Reporting The All of Government Service Deliveryteam will be monitoring compliance with the framework. Monitoring will initially cover SPF, DMARC, and MTA-STS settings and will be expanded to include DKIM. Changes to these settings will be monitored, enabling reporting on email security compliance across all government agencies. Ongoing monitoring will highlight changes to domains, ensure new domains are set up with security in place, and monitor the implementation of future email security technologies.  Should compliance changes occur, such as an agency’s SPF record being changed from -all to ~all, this will be captured so that the AoGSD Security Team can investigate. They will then communicate directly with the agency to determine if an issue exists or if an error has occurred, reviewing each case individually. Deployment Checklist for NZ Government Compliance Enforce TLS 1.2 minimum, implicit TLS, MTA-STS & TLS-RPT SPF with -all DKIM on all outbound email DMARC p=reject  adkim=s where suitable For non-email/parked domains: SPF -all, empty DKIM, DMARC reject strict Compliance dashboard Inbound DMARC evaluation enforced DLP aligned with NZISM Start a Free Trial How EasyDMARC Can Help Government Agencies Comply EasyDMARC provides a comprehensive email security solution that simplifies the deployment and ongoing management of DNS-based email security protocols like SPF, DKIM, and DMARC with reporting. Our platform offers automated checks, real-time monitoring, and a guided setup to help government organizations quickly reach compliance. 1. TLS-RPT / MTA-STS audit EasyDMARC enables you to enable the Managed MTA-STS and TLS-RPT option with a single click. We provide the required DNS records and continuously monitor them for issues, delivering reports on TLS negotiation problems. This helps agencies ensure secure email transmission and quickly detect delivery or encryption failures. Note: In this screenshot, you can see how to deploy MTA-STS and TLS Reporting by adding just three CNAME records provided by EasyDMARC. It’s recommended to start in “testing” mode, evaluate the TLS-RPT reports, and then gradually switch your MTA-STS policy to “enforce”. The process is simple and takes just a few clicks. As shown above, EasyDMARC parses incoming TLS reports into a centralized dashboard, giving you clear visibility into delivery and encryption issues across all sending sources. 2. SPF with “-all”In the EasyDARC platform, you can run the SPF Record Generator to create a compliant record. Publish your v=spf1 record with “-all” to enforce a hard fail for unauthorized senders and prevent spoofed emails from passing SPF checks. This strengthens your domain’s protection against impersonation. Note: It is highly recommended to start adjusting your SPF record only after you begin receiving DMARC reports and identifying your legitimate email sources. As we’ll explain in more detail below, both SPF and DKIM should be adjusted after you gain visibility through reports. Making changes without proper visibility can lead to false positives, misconfigurations, and potential loss of legitimate emails. That’s why the first step should always be setting DMARC to p=none, receiving reports, analyzing them, and then gradually fixing any SPF or DKIM issues. 3. DKIM on all outbound email DKIM must be configured for all email sources sending emails on behalf of your domain. This is critical, as DKIM plays a bigger role than SPF when it comes to building domain reputation, surviving auto-forwarding, mailing lists, and other edge cases. As mentioned above, DMARC reports provide visibility into your email sources, allowing you to implement DKIM accordingly. If you’re using third-party services like Google Workspace, Microsoft 365, or Mimecast, you’ll need to retrieve the public DKIM key from your provider’s admin interface. EasyDMARC maintains a backend directory of over 1,400 email sources. We also give you detailed guidance on how to configure SPF and DKIM correctly for major ESPs.  Note: At the end of this article, you’ll find configuration links for well-known ESPs like Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid – helping you avoid common misconfigurations and get aligned with SGE requirements. If you’re using a dedicated MTA, DKIM must be implemented manually. EasyDMARC’s DKIM Record Generator lets you generate both public and private keys for your server. The private key is stored on your MTA, while the public key must be published in your DNS. 4. DMARC p=reject rollout As mentioned in previous points, DMARC reporting is the first and most important step on your DMARC enforcement journey. Always start with a p=none policy and configure RUA reports to be sent to EasyDMARC. Use the report insights to identify and fix SPF and DKIM alignment issues, then gradually move to p=quarantine and finally p=reject once all legitimate email sources have been authenticated.  This phased approach ensures full protection against domain spoofing without risking legitimate email delivery. 5. adkim Strict Alignment Check This strict alignment check is not always applicable, especially if you’re using third-party bulk ESPs, such as Sendgrid, that require you to set DKIM on a subdomain level. You can set adkim=s in your DMARC TXT record, or simply enable strict mode in EasyDMARC’s Managed DMARC settings. This ensures that only emails with a DKIM signature that exactly match your domain pass alignment, adding an extra layer of protection against domain spoofing. But only do this if you are NOT a bulk sender. 6. Securing Non-Email Enabled Domains The purpose of deploying email security to non-email-enabled domains, or parked domains, is to prevent messages being spoofed from that domain. This requirement remains even if the root-level domain has SP=reject set within its DMARC record. Under this new framework, you must bulk import and mark parked domains as “Parked.” Crucially, this requires adjusting SPF settings to an empty record, setting DMARC to p=reject, and ensuring an empty DKIM record is in place: • SPF record: “v=spf1 -all”. • Wildcard DKIM record with empty public key.• DMARC record: “v=DMARC1;p=reject;adkim=s;aspf=s;rua=mailto:…”. EasyDMARC allows you to add and label parked domains for free. This is important because it helps you monitor any activity from these domains and ensure they remain protected with a strict DMARC policy of p=reject. 7. Compliance Dashboard Use EasyDMARC’s Domain Scanner to assess the security posture of each domain with a clear compliance score and risk level. The dashboard highlights configuration gaps and guides remediation steps, helping government agencies stay on track toward full compliance with the SGE Framework. 8. Inbound DMARC Evaluation Enforced You don’t need to apply any changes if you’re using Google Workspace, Microsoft 365, or other major mailbox providers. Most of them already enforce DMARC evaluation on incoming emails. However, some legacy Microsoft 365 setups may still quarantine emails that fail DMARC checks, even when the sending domain has a p=reject policy, instead of rejecting them. This behavior can be adjusted directly from your Microsoft Defender portal. about this in our step-by-step guide on how to set up SPF, DKIM, and DMARC from Microsoft Defender. If you’re using a third-party mail provider that doesn’t enforce having a DMARC policy for incoming emails, which is rare, you’ll need to contact their support to request a configuration change. 9. Data Loss Prevention Aligned with NZISM The New Zealand Information Security Manualis the New Zealand Government’s manual on information assurance and information systems security. It includes guidance on data loss prevention, which must be followed to be aligned with the SEG. Need Help Setting up SPF and DKIM for your Email Provider? Setting up SPF and DKIM for different ESPs often requires specific configurations. Some providers require you to publish SPF and DKIM on a subdomain, while others only require DKIM, or have different formatting rules. We’ve simplified all these steps to help you avoid misconfigurations that could delay your DMARC enforcement, or worse, block legitimate emails from reaching your recipients. Below you’ll find comprehensive setup guides for Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid. You can also explore our full blog section that covers setup instructions for many other well-known ESPs. Remember, all this information is reflected in your DMARC aggregate reports. These reports give you live visibility into your outgoing email ecosystem, helping you analyze and fix any issues specific to a given provider. Here are our step-by-step guides for the most common platforms: Google Workspace Microsoft 365 These guides will help ensure your DNS records are configured correctly as part of the Secure Government EmailFramework rollout. Meet New Government Email Security Standards With EasyDMARC New Zealand’s SEG Framework sets a clear path for government agencies to enhance their email security by October 2025. With EasyDMARC, you can meet these technical requirements efficiently and with confidence. From protocol setup to continuous monitoring and compliance tracking, EasyDMARC streamlines the entire process, ensuring strong protection against spoofing, phishing, and data loss while simplifying your transition from SEEMail. #new #zealands #email #security #requirements
    EASYDMARC.COM
    New Zealand’s Email Security Requirements for Government Organizations: What You Need to Know
    The Secure Government Email (SGE) Common Implementation Framework New Zealand’s government is introducing a comprehensive email security framework designed to protect official communications from phishing and domain spoofing. This new framework, which will be mandatory for all government agencies by October 2025, establishes clear technical standards to enhance email security and retire the outdated SEEMail service.  Key Takeaways All NZ government agencies must comply with new email security requirements by October 2025. The new framework strengthens trust and security in government communications by preventing spoofing and phishing. The framework mandates TLS 1.2+, SPF, DKIM, DMARC with p=reject, MTA-STS, and DLP controls. EasyDMARC simplifies compliance with our guided setup, monitoring, and automated reporting. Start a Free Trial What is the Secure Government Email Common Implementation Framework? The Secure Government Email (SGE) Common Implementation Framework is a new government-led initiative in New Zealand designed to standardize email security across all government agencies. Its main goal is to secure external email communication, reduce domain spoofing in phishing attacks, and replace the legacy SEEMail service. Why is New Zealand Implementing New Government Email Security Standards? The framework was developed by New Zealand’s Department of Internal Affairs (DIA) as part of its role in managing ICT Common Capabilities. It leverages modern email security controls via the Domain Name System (DNS) to enable the retirement of the legacy SEEMail service and provide: Encryption for transmission security Digital signing for message integrity Basic non-repudiation (by allowing only authorized senders) Domain spoofing protection These improvements apply to all emails, not just those routed through SEEMail, offering broader protection across agency communications. What Email Security Technologies Are Required by the New NZ SGE Framework? The SGE Framework outlines the following key technologies that agencies must implement: TLS 1.2 or higher with implicit TLS enforced TLS-RPT (TLS Reporting) SPF (Sender Policy Framework) DKIM (DomainKeys Identified Mail) DMARC (Domain-based Message Authentication, Reporting, and Conformance) with reporting MTA-STS (Mail Transfer Agent Strict Transport Security) Data Loss Prevention controls These technologies work together to ensure encrypted email transmission, validate sender identity, prevent unauthorized use of domains, and reduce the risk of sensitive data leaks. Get in touch When Do NZ Government Agencies Need to Comply with this Framework? All New Zealand government agencies are expected to fully implement the Secure Government Email (SGE) Common Implementation Framework by October 2025. Agencies should begin their planning and deployment now to ensure full compliance by the deadline. The All of Government Secure Email Common Implementation Framework v1.0 What are the Mandated Requirements for Domains? Below are the exact requirements for all email-enabled domains under the new framework. ControlExact RequirementTLSMinimum TLS 1.2. TLS 1.1, 1.0, SSL, or clear-text not permitted.TLS-RPTAll email-sending domains must have TLS reporting enabled.SPFMust exist and end with -all.DKIMAll outbound email from every sending service must be DKIM-signed at the final hop.DMARCPolicy of p=reject on all email-enabled domains. adkim=s is recommended when not bulk-sending.MTA-STSEnabled and set to enforce.Implicit TLSMust be configured and enforced for every connection.Data Loss PreventionEnforce in line with the New Zealand Information Security Manual (NZISM) and Protective Security Requirements (PSR). Compliance Monitoring and Reporting The All of Government Service Delivery (AoGSD) team will be monitoring compliance with the framework. Monitoring will initially cover SPF, DMARC, and MTA-STS settings and will be expanded to include DKIM. Changes to these settings will be monitored, enabling reporting on email security compliance across all government agencies. Ongoing monitoring will highlight changes to domains, ensure new domains are set up with security in place, and monitor the implementation of future email security technologies.  Should compliance changes occur, such as an agency’s SPF record being changed from -all to ~all, this will be captured so that the AoGSD Security Team can investigate. They will then communicate directly with the agency to determine if an issue exists or if an error has occurred, reviewing each case individually. Deployment Checklist for NZ Government Compliance Enforce TLS 1.2 minimum, implicit TLS, MTA-STS & TLS-RPT SPF with -all DKIM on all outbound email DMARC p=reject  adkim=s where suitable For non-email/parked domains: SPF -all, empty DKIM, DMARC reject strict Compliance dashboard Inbound DMARC evaluation enforced DLP aligned with NZISM Start a Free Trial How EasyDMARC Can Help Government Agencies Comply EasyDMARC provides a comprehensive email security solution that simplifies the deployment and ongoing management of DNS-based email security protocols like SPF, DKIM, and DMARC with reporting. Our platform offers automated checks, real-time monitoring, and a guided setup to help government organizations quickly reach compliance. 1. TLS-RPT / MTA-STS audit EasyDMARC enables you to enable the Managed MTA-STS and TLS-RPT option with a single click. We provide the required DNS records and continuously monitor them for issues, delivering reports on TLS negotiation problems. This helps agencies ensure secure email transmission and quickly detect delivery or encryption failures. Note: In this screenshot, you can see how to deploy MTA-STS and TLS Reporting by adding just three CNAME records provided by EasyDMARC. It’s recommended to start in “testing” mode, evaluate the TLS-RPT reports, and then gradually switch your MTA-STS policy to “enforce”. The process is simple and takes just a few clicks. As shown above, EasyDMARC parses incoming TLS reports into a centralized dashboard, giving you clear visibility into delivery and encryption issues across all sending sources. 2. SPF with “-all”In the EasyDARC platform, you can run the SPF Record Generator to create a compliant record. Publish your v=spf1 record with “-all” to enforce a hard fail for unauthorized senders and prevent spoofed emails from passing SPF checks. This strengthens your domain’s protection against impersonation. Note: It is highly recommended to start adjusting your SPF record only after you begin receiving DMARC reports and identifying your legitimate email sources. As we’ll explain in more detail below, both SPF and DKIM should be adjusted after you gain visibility through reports. Making changes without proper visibility can lead to false positives, misconfigurations, and potential loss of legitimate emails. That’s why the first step should always be setting DMARC to p=none, receiving reports, analyzing them, and then gradually fixing any SPF or DKIM issues. 3. DKIM on all outbound email DKIM must be configured for all email sources sending emails on behalf of your domain. This is critical, as DKIM plays a bigger role than SPF when it comes to building domain reputation, surviving auto-forwarding, mailing lists, and other edge cases. As mentioned above, DMARC reports provide visibility into your email sources, allowing you to implement DKIM accordingly (see first screenshot). If you’re using third-party services like Google Workspace, Microsoft 365, or Mimecast, you’ll need to retrieve the public DKIM key from your provider’s admin interface (see second screenshot). EasyDMARC maintains a backend directory of over 1,400 email sources. We also give you detailed guidance on how to configure SPF and DKIM correctly for major ESPs.  Note: At the end of this article, you’ll find configuration links for well-known ESPs like Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid – helping you avoid common misconfigurations and get aligned with SGE requirements. If you’re using a dedicated MTA (e.g., Postfix), DKIM must be implemented manually. EasyDMARC’s DKIM Record Generator lets you generate both public and private keys for your server. The private key is stored on your MTA, while the public key must be published in your DNS (see third and fourth screenshots). 4. DMARC p=reject rollout As mentioned in previous points, DMARC reporting is the first and most important step on your DMARC enforcement journey. Always start with a p=none policy and configure RUA reports to be sent to EasyDMARC. Use the report insights to identify and fix SPF and DKIM alignment issues, then gradually move to p=quarantine and finally p=reject once all legitimate email sources have been authenticated.  This phased approach ensures full protection against domain spoofing without risking legitimate email delivery. 5. adkim Strict Alignment Check This strict alignment check is not always applicable, especially if you’re using third-party bulk ESPs, such as Sendgrid, that require you to set DKIM on a subdomain level. You can set adkim=s in your DMARC TXT record, or simply enable strict mode in EasyDMARC’s Managed DMARC settings. This ensures that only emails with a DKIM signature that exactly match your domain pass alignment, adding an extra layer of protection against domain spoofing. But only do this if you are NOT a bulk sender. 6. Securing Non-Email Enabled Domains The purpose of deploying email security to non-email-enabled domains, or parked domains, is to prevent messages being spoofed from that domain. This requirement remains even if the root-level domain has SP=reject set within its DMARC record. Under this new framework, you must bulk import and mark parked domains as “Parked.” Crucially, this requires adjusting SPF settings to an empty record, setting DMARC to p=reject, and ensuring an empty DKIM record is in place: • SPF record: “v=spf1 -all”. • Wildcard DKIM record with empty public key.• DMARC record: “v=DMARC1;p=reject;adkim=s;aspf=s;rua=mailto:…”. EasyDMARC allows you to add and label parked domains for free. This is important because it helps you monitor any activity from these domains and ensure they remain protected with a strict DMARC policy of p=reject. 7. Compliance Dashboard Use EasyDMARC’s Domain Scanner to assess the security posture of each domain with a clear compliance score and risk level. The dashboard highlights configuration gaps and guides remediation steps, helping government agencies stay on track toward full compliance with the SGE Framework. 8. Inbound DMARC Evaluation Enforced You don’t need to apply any changes if you’re using Google Workspace, Microsoft 365, or other major mailbox providers. Most of them already enforce DMARC evaluation on incoming emails. However, some legacy Microsoft 365 setups may still quarantine emails that fail DMARC checks, even when the sending domain has a p=reject policy, instead of rejecting them. This behavior can be adjusted directly from your Microsoft Defender portal. Read more about this in our step-by-step guide on how to set up SPF, DKIM, and DMARC from Microsoft Defender. If you’re using a third-party mail provider that doesn’t enforce having a DMARC policy for incoming emails, which is rare, you’ll need to contact their support to request a configuration change. 9. Data Loss Prevention Aligned with NZISM The New Zealand Information Security Manual (NZISM) is the New Zealand Government’s manual on information assurance and information systems security. It includes guidance on data loss prevention (DLP), which must be followed to be aligned with the SEG. Need Help Setting up SPF and DKIM for your Email Provider? Setting up SPF and DKIM for different ESPs often requires specific configurations. Some providers require you to publish SPF and DKIM on a subdomain, while others only require DKIM, or have different formatting rules. We’ve simplified all these steps to help you avoid misconfigurations that could delay your DMARC enforcement, or worse, block legitimate emails from reaching your recipients. Below you’ll find comprehensive setup guides for Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid. You can also explore our full blog section that covers setup instructions for many other well-known ESPs. Remember, all this information is reflected in your DMARC aggregate reports. These reports give you live visibility into your outgoing email ecosystem, helping you analyze and fix any issues specific to a given provider. Here are our step-by-step guides for the most common platforms: Google Workspace Microsoft 365 These guides will help ensure your DNS records are configured correctly as part of the Secure Government Email (SGE) Framework rollout. Meet New Government Email Security Standards With EasyDMARC New Zealand’s SEG Framework sets a clear path for government agencies to enhance their email security by October 2025. With EasyDMARC, you can meet these technical requirements efficiently and with confidence. From protocol setup to continuous monitoring and compliance tracking, EasyDMARC streamlines the entire process, ensuring strong protection against spoofing, phishing, and data loss while simplifying your transition from SEEMail.
    0 Reacties 0 aandelen
  • Five Climate Issues to Watch When Trump Goes to Canada

    June 13, 20255 min readFive Climate Issues to Watch When Trump Goes to CanadaPresident Trump will attend the G7 summit on Sunday in a nation he threatened to annex. He will also be an outlier on climate issuesBy Sara Schonhardt & E&E News Saul Loeb/AFP via Getty ImagesCLIMATEWIRE | The world’s richest nations are gathering Sunday in the Canadian Rockies for a summit that could reveal whether President Donald Trump's policies are shaking global climate efforts.The Group of Seven meeting comes at a challenging time for international climate policy. Trump’s tariff seesaw has cast a shade over the global economy, and his domestic policies have threatened billions of dollars in funding for clean energy programs. Those pressures are colliding with record-breaking temperatures worldwide and explosive demand for energy, driven by power-hungry data centers linked to artificial intelligence technologies.On top of that, Trump has threatened to annex the host of the meeting — Canada — and members of his Cabinet have taken swipes at Europe’s use of renewable energy. Rather than being aligned with much of the world's assertion that fossil fuels should be tempered, Trump embraces the opposite position — drill for more oil and gas and keep burning coal, while repealing environmental regulations on the biggest sources of U.S. carbon pollution.On supporting science journalismIf you're enjoying this article, consider supporting our award-winning journalism by subscribing. By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.Those moves illustrate his rejection of climate science and underscore his outlying positions on global warming in the G7.Here are five things to know about the summit.Who will be there?The group comprises Canada, France, Germany, Italy, Japan, the United Kingdom and the United States — plus the European Union. Together they account for more than 40 percent of gross domestic product globally and around a quarter of all energy-related carbon dioxide pollution, according to the International Energy Agency. The U.S. is the only one among them that is not trying to hit a carbon reduction goal.Some emerging economies have also been invited, including Mexico, India, South Africa and Brazil, the host of this year’s COP30 climate talks in November.Ahead of the meeting, the office of Canada's prime minister, Mark Carney, said he and Brazilian President Luiz Inácio Lula da Silva agreed to strengthen cooperation on energy security and critical minerals. White House press secretary Karoline Leavitt said Trump would be having "quite a few" bilateral meetings but that his schedule was in flux.The G7 first came together 50 years ago following the Arab oil embargo. Since then, its seven members have all joined the United Nations Framework Convention on Climate Change and the Paris Agreement. The U.S. is the only nation in the group that has withdrawn from the Paris Agreement, which counts almost every country in the world as a signatory.What’s on the table?Among Canada’s top priorities as host are strengthening energy security and fortifying critical mineral supply chains. Carney would also like to see some agreement on joint wildfire action.Expanding supply chains for critical minerals — and competing more aggressively with China over those resources — could be areas of common ground among the leaders. Climate change is expected to remain divisive. Looming over the discussions will be tariffs — which Trump has applied across the board — because they will have an impact on the clean energy transition.“I think probably the majority of the conversation will be less about climate per se, or certainly not using climate action as the frame, but more about energy transition and infrastructure as a way of kind of bridging the known gaps between most of the G7 and where the United States is right now,” said Dan Baer, director of the Europe program at the Carnegie Endowment for International Peace.What are the possible outcomes?The leaders could issue a communique at the end of their meeting, but those statements are based on consensus, something that would be difficult to reach without other G7 countries capitulating to Trump. Bloomberg reported Wednesday that nations won’t try to reach a joint agreement, in part because bridging gaps on climate change could be too hard.Instead, Carney could issue a chair’s summary or joint statements based on certain issues.The question is how far Canada will go to accommodate the U.S., which could try to roll back past statements on advancing clean energy, said Andrew Light, former assistant secretary of Energy for international affairs, who led ministerial-level negotiations for the G7.“They might say, rather than watering everything down that we accomplished in the last four years, we just do a chair's statement, which summarizes the debate,” Light said. “That will show you that you didn't get consensus, but you also didn't get capitulation.”What to watch forIf there is a communique, Light says he’ll be looking for whether there is tougher language on China and any signal of support for science and the Paris Agreement. During his first term, Trump refused to support the Paris accord in the G7 and G20 declarations.The statement could avoid climate and energy issues entirely. But if it backtracks on those issues, that could be a sign that countries made a deal by trading climate-related language for something else, Light said.Baer of Carnegie said a statement framed around energy security and infrastructure could be seen as a “pragmatic adaptation” to the U.S. administration, rather than an indication that other leaders aren’t concerned about climate change.Climate activists have lower expectations.“Realistically, we can expect very little, if any, mention of climate change,” said Caroline Brouillette, executive director of Climate Action Network Canada.“The message we should be expecting from those leaders is that climate action remains a priority for the rest of the G7 … whether it's on the transition away from fossil fuels and supporting developing countries through climate finance,” she said. “Especially now that the U.S. is stepping back, we need countries, including Canada, to be stepping up.”Best- and worst-case scenariosThe challenge for Carney will be preventing any further rupture with Trump, analysts said.In 2018, Trump made a hasty exit from the G7 summit, also in Canada that year, due largely to trade disagreements. He retracted his support for the joint statement.“The best,realistic case outcome is that things don't get worse,” said Baer.The worst-case scenario? Some kind of “highly personalized spat” that could add to the sense of disorder, he added.“I think the G7 on the one hand has the potential to be more important than ever, as fewer and fewer platforms for international cooperation seem to be able to take action,” Baer said. “So it's both very important and also I don't have super-high expectations.”Reprinted from E&E News with permission from POLITICO, LLC. Copyright 2025. E&E News provides essential news for energy and environment professionals.
    #five #climate #issues #watch #when
    Five Climate Issues to Watch When Trump Goes to Canada
    June 13, 20255 min readFive Climate Issues to Watch When Trump Goes to CanadaPresident Trump will attend the G7 summit on Sunday in a nation he threatened to annex. He will also be an outlier on climate issuesBy Sara Schonhardt & E&E News Saul Loeb/AFP via Getty ImagesCLIMATEWIRE | The world’s richest nations are gathering Sunday in the Canadian Rockies for a summit that could reveal whether President Donald Trump's policies are shaking global climate efforts.The Group of Seven meeting comes at a challenging time for international climate policy. Trump’s tariff seesaw has cast a shade over the global economy, and his domestic policies have threatened billions of dollars in funding for clean energy programs. Those pressures are colliding with record-breaking temperatures worldwide and explosive demand for energy, driven by power-hungry data centers linked to artificial intelligence technologies.On top of that, Trump has threatened to annex the host of the meeting — Canada — and members of his Cabinet have taken swipes at Europe’s use of renewable energy. Rather than being aligned with much of the world's assertion that fossil fuels should be tempered, Trump embraces the opposite position — drill for more oil and gas and keep burning coal, while repealing environmental regulations on the biggest sources of U.S. carbon pollution.On supporting science journalismIf you're enjoying this article, consider supporting our award-winning journalism by subscribing. By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.Those moves illustrate his rejection of climate science and underscore his outlying positions on global warming in the G7.Here are five things to know about the summit.Who will be there?The group comprises Canada, France, Germany, Italy, Japan, the United Kingdom and the United States — plus the European Union. Together they account for more than 40 percent of gross domestic product globally and around a quarter of all energy-related carbon dioxide pollution, according to the International Energy Agency. The U.S. is the only one among them that is not trying to hit a carbon reduction goal.Some emerging economies have also been invited, including Mexico, India, South Africa and Brazil, the host of this year’s COP30 climate talks in November.Ahead of the meeting, the office of Canada's prime minister, Mark Carney, said he and Brazilian President Luiz Inácio Lula da Silva agreed to strengthen cooperation on energy security and critical minerals. White House press secretary Karoline Leavitt said Trump would be having "quite a few" bilateral meetings but that his schedule was in flux.The G7 first came together 50 years ago following the Arab oil embargo. Since then, its seven members have all joined the United Nations Framework Convention on Climate Change and the Paris Agreement. The U.S. is the only nation in the group that has withdrawn from the Paris Agreement, which counts almost every country in the world as a signatory.What’s on the table?Among Canada’s top priorities as host are strengthening energy security and fortifying critical mineral supply chains. Carney would also like to see some agreement on joint wildfire action.Expanding supply chains for critical minerals — and competing more aggressively with China over those resources — could be areas of common ground among the leaders. Climate change is expected to remain divisive. Looming over the discussions will be tariffs — which Trump has applied across the board — because they will have an impact on the clean energy transition.“I think probably the majority of the conversation will be less about climate per se, or certainly not using climate action as the frame, but more about energy transition and infrastructure as a way of kind of bridging the known gaps between most of the G7 and where the United States is right now,” said Dan Baer, director of the Europe program at the Carnegie Endowment for International Peace.What are the possible outcomes?The leaders could issue a communique at the end of their meeting, but those statements are based on consensus, something that would be difficult to reach without other G7 countries capitulating to Trump. Bloomberg reported Wednesday that nations won’t try to reach a joint agreement, in part because bridging gaps on climate change could be too hard.Instead, Carney could issue a chair’s summary or joint statements based on certain issues.The question is how far Canada will go to accommodate the U.S., which could try to roll back past statements on advancing clean energy, said Andrew Light, former assistant secretary of Energy for international affairs, who led ministerial-level negotiations for the G7.“They might say, rather than watering everything down that we accomplished in the last four years, we just do a chair's statement, which summarizes the debate,” Light said. “That will show you that you didn't get consensus, but you also didn't get capitulation.”What to watch forIf there is a communique, Light says he’ll be looking for whether there is tougher language on China and any signal of support for science and the Paris Agreement. During his first term, Trump refused to support the Paris accord in the G7 and G20 declarations.The statement could avoid climate and energy issues entirely. But if it backtracks on those issues, that could be a sign that countries made a deal by trading climate-related language for something else, Light said.Baer of Carnegie said a statement framed around energy security and infrastructure could be seen as a “pragmatic adaptation” to the U.S. administration, rather than an indication that other leaders aren’t concerned about climate change.Climate activists have lower expectations.“Realistically, we can expect very little, if any, mention of climate change,” said Caroline Brouillette, executive director of Climate Action Network Canada.“The message we should be expecting from those leaders is that climate action remains a priority for the rest of the G7 … whether it's on the transition away from fossil fuels and supporting developing countries through climate finance,” she said. “Especially now that the U.S. is stepping back, we need countries, including Canada, to be stepping up.”Best- and worst-case scenariosThe challenge for Carney will be preventing any further rupture with Trump, analysts said.In 2018, Trump made a hasty exit from the G7 summit, also in Canada that year, due largely to trade disagreements. He retracted his support for the joint statement.“The best,realistic case outcome is that things don't get worse,” said Baer.The worst-case scenario? Some kind of “highly personalized spat” that could add to the sense of disorder, he added.“I think the G7 on the one hand has the potential to be more important than ever, as fewer and fewer platforms for international cooperation seem to be able to take action,” Baer said. “So it's both very important and also I don't have super-high expectations.”Reprinted from E&E News with permission from POLITICO, LLC. Copyright 2025. E&E News provides essential news for energy and environment professionals. #five #climate #issues #watch #when
    WWW.SCIENTIFICAMERICAN.COM
    Five Climate Issues to Watch When Trump Goes to Canada
    June 13, 20255 min readFive Climate Issues to Watch When Trump Goes to CanadaPresident Trump will attend the G7 summit on Sunday in a nation he threatened to annex. He will also be an outlier on climate issuesBy Sara Schonhardt & E&E News Saul Loeb/AFP via Getty ImagesCLIMATEWIRE | The world’s richest nations are gathering Sunday in the Canadian Rockies for a summit that could reveal whether President Donald Trump's policies are shaking global climate efforts.The Group of Seven meeting comes at a challenging time for international climate policy. Trump’s tariff seesaw has cast a shade over the global economy, and his domestic policies have threatened billions of dollars in funding for clean energy programs. Those pressures are colliding with record-breaking temperatures worldwide and explosive demand for energy, driven by power-hungry data centers linked to artificial intelligence technologies.On top of that, Trump has threatened to annex the host of the meeting — Canada — and members of his Cabinet have taken swipes at Europe’s use of renewable energy. Rather than being aligned with much of the world's assertion that fossil fuels should be tempered, Trump embraces the opposite position — drill for more oil and gas and keep burning coal, while repealing environmental regulations on the biggest sources of U.S. carbon pollution.On supporting science journalismIf you're enjoying this article, consider supporting our award-winning journalism by subscribing. By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.Those moves illustrate his rejection of climate science and underscore his outlying positions on global warming in the G7.Here are five things to know about the summit.Who will be there?The group comprises Canada, France, Germany, Italy, Japan, the United Kingdom and the United States — plus the European Union. Together they account for more than 40 percent of gross domestic product globally and around a quarter of all energy-related carbon dioxide pollution, according to the International Energy Agency. The U.S. is the only one among them that is not trying to hit a carbon reduction goal.Some emerging economies have also been invited, including Mexico, India, South Africa and Brazil, the host of this year’s COP30 climate talks in November.Ahead of the meeting, the office of Canada's prime minister, Mark Carney, said he and Brazilian President Luiz Inácio Lula da Silva agreed to strengthen cooperation on energy security and critical minerals. White House press secretary Karoline Leavitt said Trump would be having "quite a few" bilateral meetings but that his schedule was in flux.The G7 first came together 50 years ago following the Arab oil embargo. Since then, its seven members have all joined the United Nations Framework Convention on Climate Change and the Paris Agreement. The U.S. is the only nation in the group that has withdrawn from the Paris Agreement, which counts almost every country in the world as a signatory.What’s on the table?Among Canada’s top priorities as host are strengthening energy security and fortifying critical mineral supply chains. Carney would also like to see some agreement on joint wildfire action.Expanding supply chains for critical minerals — and competing more aggressively with China over those resources — could be areas of common ground among the leaders. Climate change is expected to remain divisive. Looming over the discussions will be tariffs — which Trump has applied across the board — because they will have an impact on the clean energy transition.“I think probably the majority of the conversation will be less about climate per se, or certainly not using climate action as the frame, but more about energy transition and infrastructure as a way of kind of bridging the known gaps between most of the G7 and where the United States is right now,” said Dan Baer, director of the Europe program at the Carnegie Endowment for International Peace.What are the possible outcomes?The leaders could issue a communique at the end of their meeting, but those statements are based on consensus, something that would be difficult to reach without other G7 countries capitulating to Trump. Bloomberg reported Wednesday that nations won’t try to reach a joint agreement, in part because bridging gaps on climate change could be too hard.Instead, Carney could issue a chair’s summary or joint statements based on certain issues.The question is how far Canada will go to accommodate the U.S., which could try to roll back past statements on advancing clean energy, said Andrew Light, former assistant secretary of Energy for international affairs, who led ministerial-level negotiations for the G7.“They might say, rather than watering everything down that we accomplished in the last four years, we just do a chair's statement, which summarizes the debate,” Light said. “That will show you that you didn't get consensus, but you also didn't get capitulation.”What to watch forIf there is a communique, Light says he’ll be looking for whether there is tougher language on China and any signal of support for science and the Paris Agreement. During his first term, Trump refused to support the Paris accord in the G7 and G20 declarations.The statement could avoid climate and energy issues entirely. But if it backtracks on those issues, that could be a sign that countries made a deal by trading climate-related language for something else, Light said.Baer of Carnegie said a statement framed around energy security and infrastructure could be seen as a “pragmatic adaptation” to the U.S. administration, rather than an indication that other leaders aren’t concerned about climate change.Climate activists have lower expectations.“Realistically, we can expect very little, if any, mention of climate change,” said Caroline Brouillette, executive director of Climate Action Network Canada.“The message we should be expecting from those leaders is that climate action remains a priority for the rest of the G7 … whether it's on the transition away from fossil fuels and supporting developing countries through climate finance,” she said. “Especially now that the U.S. is stepping back, we need countries, including Canada, to be stepping up.”Best- and worst-case scenariosThe challenge for Carney will be preventing any further rupture with Trump, analysts said.In 2018, Trump made a hasty exit from the G7 summit, also in Canada that year, due largely to trade disagreements. He retracted his support for the joint statement.“The best, [most] realistic case outcome is that things don't get worse,” said Baer.The worst-case scenario? Some kind of “highly personalized spat” that could add to the sense of disorder, he added.“I think the G7 on the one hand has the potential to be more important than ever, as fewer and fewer platforms for international cooperation seem to be able to take action,” Baer said. “So it's both very important and also I don't have super-high expectations.”Reprinted from E&E News with permission from POLITICO, LLC. Copyright 2025. E&E News provides essential news for energy and environment professionals.
    0 Reacties 0 aandelen
  • Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm

    Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy. Learn more

    When DeepSeek released its R1 model this January, it wasn’t just another AI announcement. It was a watershed moment that sent shockwaves through the tech industry, forcing industry leaders to reconsider their fundamental approaches to AI development.
    What makes DeepSeek’s accomplishment remarkable isn’t that the company developed novel capabilities; rather, it was how it achieved comparable results to those delivered by tech heavyweights at a fraction of the cost. In reality, DeepSeek didn’t do anything that hadn’t been done before; its innovation stemmed from pursuing different priorities. As a result, we are now experiencing rapid-fire development along two parallel tracks: efficiency and compute. 
    As DeepSeek prepares to release its R2 model, and as it concurrently faces the potential of even greater chip restrictions from the U.S., it’s important to look at how it captured so much attention.
    Engineering around constraints
    DeepSeek’s arrival, as sudden and dramatic as it was, captivated us all because it showcased the capacity for innovation to thrive even under significant constraints. Faced with U.S. export controls limiting access to cutting-edge AI chips, DeepSeek was forced to find alternative pathways to AI advancement.
    While U.S. companies pursued performance gains through more powerful hardware, bigger models and better data, DeepSeek focused on optimizing what was available. It implemented known ideas with remarkable execution — and there is novelty in executing what’s known and doing it well.
    This efficiency-first mindset yielded incredibly impressive results. DeepSeek’s R1 model reportedly matches OpenAI’s capabilities at just 5 to 10% of the operating cost. According to reports, the final training run for DeepSeek’s V3 predecessor cost a mere million — which was described by former Tesla AI scientist Andrej Karpathy as “a joke of a budget” compared to the tens or hundreds of millions spent by U.S. competitors. More strikingly, while OpenAI reportedly spent million training its recent “Orion” model, DeepSeek achieved superior benchmark results for just million — less than 1.2% of OpenAI’s investment.
    If you get starry eyed believing these incredible results were achieved even as DeepSeek was at a severe disadvantage based on its inability to access advanced AI chips, I hate to tell you, but that narrative isn’t entirely accurate. Initial U.S. export controls focused primarily on compute capabilities, not on memory and networking — two crucial components for AI development.
    That means that the chips DeepSeek had access to were not poor quality chips; their networking and memory capabilities allowed DeepSeek to parallelize operations across many units, a key strategy for running their large model efficiently.
    This, combined with China’s national push toward controlling the entire vertical stack of AI infrastructure, resulted in accelerated innovation that many Western observers didn’t anticipate. DeepSeek’s advancements were an inevitable part of AI development, but they brought known advancements forward a few years earlier than would have been possible otherwise, and that’s pretty amazing.
    Pragmatism over process
    Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. Rather than relying solely on web-scraped content, DeepSeek reportedly leveraged significant amounts of synthetic data and outputs from other proprietary models. This is a classic example of model distillation, or the ability to learn from really powerful models. Such an approach, however, raises questions about data privacy and governance that might concern Western enterprise customers. Still, it underscores DeepSeek’s overall pragmatic focus on results over process.
    The effective use of synthetic data is a key differentiator. Synthetic data can be very effective when it comes to training large models, but you have to be careful; some model architectures handle synthetic data better than others. For instance, transformer-based models with mixture of expertsarchitectures like DeepSeek’s tend to be more robust when incorporating synthetic data, while more traditional dense architectures like those used in early Llama models can experience performance degradation or even “model collapse” when trained on too much synthetic content.
    This architectural sensitivity matters because synthetic data introduces different patterns and distributions compared to real-world data. When a model architecture doesn’t handle synthetic data well, it may learn shortcuts or biases present in the synthetic data generation process rather than generalizable knowledge. This can lead to reduced performance on real-world tasks, increased hallucinations or brittleness when facing novel situations. 
    Still, DeepSeek’s engineering teams reportedly designed their model architecture specifically with synthetic data integration in mind from the earliest planning stages. This allowed the company to leverage the cost benefits of synthetic data without sacrificing performance.
    Market reverberations
    Why does all of this matter? Stock market aside, DeepSeek’s emergence has triggered substantive strategic shifts among industry leaders.
    Case in point: OpenAI. Sam Altman recently announced plans to release the company’s first “open-weight” language model since 2019. This is a pretty notable pivot for a company that built its business on proprietary systems. It seems DeepSeek’s rise, on top of Llama’s success, has hit OpenAI’s leader hard. Just a month after DeepSeek arrived on the scene, Altman admitted that OpenAI had been “on the wrong side of history” regarding open-source AI. 
    With OpenAI reportedly spending to 8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: “You’re spending billion or billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free.” This necessitates change.
    This economic reality prompted OpenAI to pursue a massive billion funding round that valued the company at an unprecedented billion. But even with a war chest of funds at its disposal, the fundamental challenge remains: OpenAI’s approach is dramatically more resource-intensive than DeepSeek’s.
    Beyond model training
    Another significant trend accelerated by DeepSeek is the shift toward “test-time compute”. As major AI labs have now trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training.
    To get around this, DeepSeek announced a collaboration with Tsinghua University to enable “self-principled critique tuning”. This approach trains AI to develop its own rules for judging content and then uses those rules to provide detailed critiques. The system includes a built-in “judge” that evaluates the AI’s answers in real-time, comparing responses against core rules and quality standards.
    The development is part of a movement towards autonomous self-evaluation and improvement in AI systems in which models use inference time to improve results, rather than simply making models larger during training. DeepSeek calls its system “DeepSeek-GRM”. But, as with its model distillation approach, this could be considered a mix of promise and risk.
    For example, if the AI develops its own judging criteria, there’s a risk those principles diverge from human values, ethics or context. The rules could end up being overly rigid or biased, optimizing for style over substance, and/or reinforce incorrect assumptions or hallucinations. Additionally, without a human in the loop, issues could arise if the “judge” is flawed or misaligned. It’s a kind of AI talking to itself, without robust external grounding. On top of this, users and developers may not understand why the AI reached a certain conclusion — which feeds into a bigger concern: Should an AI be allowed to decide what is “good” or “correct” based solely on its own logic? These risks shouldn’t be discounted.
    At the same time, this approach is gaining traction, as again DeepSeek builds on the body of work of othersto create what is likely the first full-stack application of SPCT in a commercial effort.
    This could mark a powerful shift in AI autonomy, but there still is a need for rigorous auditing, transparency and safeguards. It’s not just about models getting smarter, but that they remain aligned, interpretable, and trustworthy as they begin critiquing themselves without human guardrails.
    Moving into the future
    So, taking all of this into account, the rise of DeepSeek signals a broader shift in the AI industry toward parallel innovation tracks. While companies continue building more powerful compute clusters for next-generation capabilities, there will also be intense focus on finding efficiency gains through software engineering and model architecture improvements to offset the challenges of AI energy consumption, which far outpaces power generation capacity. 
    Companies are taking note. Microsoft, for example, has halted data center development in multiple regions globally, recalibrating toward a more distributed, efficient infrastructure approach. While still planning to invest approximately billion in AI infrastructure this fiscal year, the company is reallocating resources in response to the efficiency gains DeepSeek introduced to the market.
    Meta has also responded,
    With so much movement in such a short time, it becomes somewhat ironic that the U.S. sanctions designed to maintain American AI dominance may have instead accelerated the very innovation they sought to contain. By constraining access to materials, DeepSeek was forced to blaze a new trail.
    Moving forward, as the industry continues to evolve globally, adaptability for all players will be key. Policies, people and market reactions will continue to shift the ground rules — whether it’s eliminating the AI diffusion rule, a new ban on technology purchases or something else entirely. It’s what we learn from one another and how we respond that will be worth watching.
    Jae Lee is CEO and co-founder of TwelveLabs.

    Daily insights on business use cases with VB Daily
    If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.
    Read our Privacy Policy

    Thanks for subscribing. Check out more VB newsletters here.

    An error occured.
    #rethinking #deepseeks #playbook #shakes #highspend
    Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm
    Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy. Learn more When DeepSeek released its R1 model this January, it wasn’t just another AI announcement. It was a watershed moment that sent shockwaves through the tech industry, forcing industry leaders to reconsider their fundamental approaches to AI development. What makes DeepSeek’s accomplishment remarkable isn’t that the company developed novel capabilities; rather, it was how it achieved comparable results to those delivered by tech heavyweights at a fraction of the cost. In reality, DeepSeek didn’t do anything that hadn’t been done before; its innovation stemmed from pursuing different priorities. As a result, we are now experiencing rapid-fire development along two parallel tracks: efficiency and compute.  As DeepSeek prepares to release its R2 model, and as it concurrently faces the potential of even greater chip restrictions from the U.S., it’s important to look at how it captured so much attention. Engineering around constraints DeepSeek’s arrival, as sudden and dramatic as it was, captivated us all because it showcased the capacity for innovation to thrive even under significant constraints. Faced with U.S. export controls limiting access to cutting-edge AI chips, DeepSeek was forced to find alternative pathways to AI advancement. While U.S. companies pursued performance gains through more powerful hardware, bigger models and better data, DeepSeek focused on optimizing what was available. It implemented known ideas with remarkable execution — and there is novelty in executing what’s known and doing it well. This efficiency-first mindset yielded incredibly impressive results. DeepSeek’s R1 model reportedly matches OpenAI’s capabilities at just 5 to 10% of the operating cost. According to reports, the final training run for DeepSeek’s V3 predecessor cost a mere million — which was described by former Tesla AI scientist Andrej Karpathy as “a joke of a budget” compared to the tens or hundreds of millions spent by U.S. competitors. More strikingly, while OpenAI reportedly spent million training its recent “Orion” model, DeepSeek achieved superior benchmark results for just million — less than 1.2% of OpenAI’s investment. If you get starry eyed believing these incredible results were achieved even as DeepSeek was at a severe disadvantage based on its inability to access advanced AI chips, I hate to tell you, but that narrative isn’t entirely accurate. Initial U.S. export controls focused primarily on compute capabilities, not on memory and networking — two crucial components for AI development. That means that the chips DeepSeek had access to were not poor quality chips; their networking and memory capabilities allowed DeepSeek to parallelize operations across many units, a key strategy for running their large model efficiently. This, combined with China’s national push toward controlling the entire vertical stack of AI infrastructure, resulted in accelerated innovation that many Western observers didn’t anticipate. DeepSeek’s advancements were an inevitable part of AI development, but they brought known advancements forward a few years earlier than would have been possible otherwise, and that’s pretty amazing. Pragmatism over process Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. Rather than relying solely on web-scraped content, DeepSeek reportedly leveraged significant amounts of synthetic data and outputs from other proprietary models. This is a classic example of model distillation, or the ability to learn from really powerful models. Such an approach, however, raises questions about data privacy and governance that might concern Western enterprise customers. Still, it underscores DeepSeek’s overall pragmatic focus on results over process. The effective use of synthetic data is a key differentiator. Synthetic data can be very effective when it comes to training large models, but you have to be careful; some model architectures handle synthetic data better than others. For instance, transformer-based models with mixture of expertsarchitectures like DeepSeek’s tend to be more robust when incorporating synthetic data, while more traditional dense architectures like those used in early Llama models can experience performance degradation or even “model collapse” when trained on too much synthetic content. This architectural sensitivity matters because synthetic data introduces different patterns and distributions compared to real-world data. When a model architecture doesn’t handle synthetic data well, it may learn shortcuts or biases present in the synthetic data generation process rather than generalizable knowledge. This can lead to reduced performance on real-world tasks, increased hallucinations or brittleness when facing novel situations.  Still, DeepSeek’s engineering teams reportedly designed their model architecture specifically with synthetic data integration in mind from the earliest planning stages. This allowed the company to leverage the cost benefits of synthetic data without sacrificing performance. Market reverberations Why does all of this matter? Stock market aside, DeepSeek’s emergence has triggered substantive strategic shifts among industry leaders. Case in point: OpenAI. Sam Altman recently announced plans to release the company’s first “open-weight” language model since 2019. This is a pretty notable pivot for a company that built its business on proprietary systems. It seems DeepSeek’s rise, on top of Llama’s success, has hit OpenAI’s leader hard. Just a month after DeepSeek arrived on the scene, Altman admitted that OpenAI had been “on the wrong side of history” regarding open-source AI.  With OpenAI reportedly spending to 8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: “You’re spending billion or billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free.” This necessitates change. This economic reality prompted OpenAI to pursue a massive billion funding round that valued the company at an unprecedented billion. But even with a war chest of funds at its disposal, the fundamental challenge remains: OpenAI’s approach is dramatically more resource-intensive than DeepSeek’s. Beyond model training Another significant trend accelerated by DeepSeek is the shift toward “test-time compute”. As major AI labs have now trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training. To get around this, DeepSeek announced a collaboration with Tsinghua University to enable “self-principled critique tuning”. This approach trains AI to develop its own rules for judging content and then uses those rules to provide detailed critiques. The system includes a built-in “judge” that evaluates the AI’s answers in real-time, comparing responses against core rules and quality standards. The development is part of a movement towards autonomous self-evaluation and improvement in AI systems in which models use inference time to improve results, rather than simply making models larger during training. DeepSeek calls its system “DeepSeek-GRM”. But, as with its model distillation approach, this could be considered a mix of promise and risk. For example, if the AI develops its own judging criteria, there’s a risk those principles diverge from human values, ethics or context. The rules could end up being overly rigid or biased, optimizing for style over substance, and/or reinforce incorrect assumptions or hallucinations. Additionally, without a human in the loop, issues could arise if the “judge” is flawed or misaligned. It’s a kind of AI talking to itself, without robust external grounding. On top of this, users and developers may not understand why the AI reached a certain conclusion — which feeds into a bigger concern: Should an AI be allowed to decide what is “good” or “correct” based solely on its own logic? These risks shouldn’t be discounted. At the same time, this approach is gaining traction, as again DeepSeek builds on the body of work of othersto create what is likely the first full-stack application of SPCT in a commercial effort. This could mark a powerful shift in AI autonomy, but there still is a need for rigorous auditing, transparency and safeguards. It’s not just about models getting smarter, but that they remain aligned, interpretable, and trustworthy as they begin critiquing themselves without human guardrails. Moving into the future So, taking all of this into account, the rise of DeepSeek signals a broader shift in the AI industry toward parallel innovation tracks. While companies continue building more powerful compute clusters for next-generation capabilities, there will also be intense focus on finding efficiency gains through software engineering and model architecture improvements to offset the challenges of AI energy consumption, which far outpaces power generation capacity.  Companies are taking note. Microsoft, for example, has halted data center development in multiple regions globally, recalibrating toward a more distributed, efficient infrastructure approach. While still planning to invest approximately billion in AI infrastructure this fiscal year, the company is reallocating resources in response to the efficiency gains DeepSeek introduced to the market. Meta has also responded, With so much movement in such a short time, it becomes somewhat ironic that the U.S. sanctions designed to maintain American AI dominance may have instead accelerated the very innovation they sought to contain. By constraining access to materials, DeepSeek was forced to blaze a new trail. Moving forward, as the industry continues to evolve globally, adaptability for all players will be key. Policies, people and market reactions will continue to shift the ground rules — whether it’s eliminating the AI diffusion rule, a new ban on technology purchases or something else entirely. It’s what we learn from one another and how we respond that will be worth watching. Jae Lee is CEO and co-founder of TwelveLabs. Daily insights on business use cases with VB Daily If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI. Read our Privacy Policy Thanks for subscribing. Check out more VB newsletters here. An error occured. #rethinking #deepseeks #playbook #shakes #highspend
    VENTUREBEAT.COM
    Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm
    Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy. Learn more When DeepSeek released its R1 model this January, it wasn’t just another AI announcement. It was a watershed moment that sent shockwaves through the tech industry, forcing industry leaders to reconsider their fundamental approaches to AI development. What makes DeepSeek’s accomplishment remarkable isn’t that the company developed novel capabilities; rather, it was how it achieved comparable results to those delivered by tech heavyweights at a fraction of the cost. In reality, DeepSeek didn’t do anything that hadn’t been done before; its innovation stemmed from pursuing different priorities. As a result, we are now experiencing rapid-fire development along two parallel tracks: efficiency and compute.  As DeepSeek prepares to release its R2 model, and as it concurrently faces the potential of even greater chip restrictions from the U.S., it’s important to look at how it captured so much attention. Engineering around constraints DeepSeek’s arrival, as sudden and dramatic as it was, captivated us all because it showcased the capacity for innovation to thrive even under significant constraints. Faced with U.S. export controls limiting access to cutting-edge AI chips, DeepSeek was forced to find alternative pathways to AI advancement. While U.S. companies pursued performance gains through more powerful hardware, bigger models and better data, DeepSeek focused on optimizing what was available. It implemented known ideas with remarkable execution — and there is novelty in executing what’s known and doing it well. This efficiency-first mindset yielded incredibly impressive results. DeepSeek’s R1 model reportedly matches OpenAI’s capabilities at just 5 to 10% of the operating cost. According to reports, the final training run for DeepSeek’s V3 predecessor cost a mere $6 million — which was described by former Tesla AI scientist Andrej Karpathy as “a joke of a budget” compared to the tens or hundreds of millions spent by U.S. competitors. More strikingly, while OpenAI reportedly spent $500 million training its recent “Orion” model, DeepSeek achieved superior benchmark results for just $5.6 million — less than 1.2% of OpenAI’s investment. If you get starry eyed believing these incredible results were achieved even as DeepSeek was at a severe disadvantage based on its inability to access advanced AI chips, I hate to tell you, but that narrative isn’t entirely accurate (even though it makes a good story). Initial U.S. export controls focused primarily on compute capabilities, not on memory and networking — two crucial components for AI development. That means that the chips DeepSeek had access to were not poor quality chips; their networking and memory capabilities allowed DeepSeek to parallelize operations across many units, a key strategy for running their large model efficiently. This, combined with China’s national push toward controlling the entire vertical stack of AI infrastructure, resulted in accelerated innovation that many Western observers didn’t anticipate. DeepSeek’s advancements were an inevitable part of AI development, but they brought known advancements forward a few years earlier than would have been possible otherwise, and that’s pretty amazing. Pragmatism over process Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. Rather than relying solely on web-scraped content, DeepSeek reportedly leveraged significant amounts of synthetic data and outputs from other proprietary models. This is a classic example of model distillation, or the ability to learn from really powerful models. Such an approach, however, raises questions about data privacy and governance that might concern Western enterprise customers. Still, it underscores DeepSeek’s overall pragmatic focus on results over process. The effective use of synthetic data is a key differentiator. Synthetic data can be very effective when it comes to training large models, but you have to be careful; some model architectures handle synthetic data better than others. For instance, transformer-based models with mixture of experts (MoE) architectures like DeepSeek’s tend to be more robust when incorporating synthetic data, while more traditional dense architectures like those used in early Llama models can experience performance degradation or even “model collapse” when trained on too much synthetic content. This architectural sensitivity matters because synthetic data introduces different patterns and distributions compared to real-world data. When a model architecture doesn’t handle synthetic data well, it may learn shortcuts or biases present in the synthetic data generation process rather than generalizable knowledge. This can lead to reduced performance on real-world tasks, increased hallucinations or brittleness when facing novel situations.  Still, DeepSeek’s engineering teams reportedly designed their model architecture specifically with synthetic data integration in mind from the earliest planning stages. This allowed the company to leverage the cost benefits of synthetic data without sacrificing performance. Market reverberations Why does all of this matter? Stock market aside, DeepSeek’s emergence has triggered substantive strategic shifts among industry leaders. Case in point: OpenAI. Sam Altman recently announced plans to release the company’s first “open-weight” language model since 2019. This is a pretty notable pivot for a company that built its business on proprietary systems. It seems DeepSeek’s rise, on top of Llama’s success, has hit OpenAI’s leader hard. Just a month after DeepSeek arrived on the scene, Altman admitted that OpenAI had been “on the wrong side of history” regarding open-source AI.  With OpenAI reportedly spending $7 to 8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: “You’re spending $7 billion or $8 billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free.” This necessitates change. This economic reality prompted OpenAI to pursue a massive $40 billion funding round that valued the company at an unprecedented $300 billion. But even with a war chest of funds at its disposal, the fundamental challenge remains: OpenAI’s approach is dramatically more resource-intensive than DeepSeek’s. Beyond model training Another significant trend accelerated by DeepSeek is the shift toward “test-time compute” (TTC). As major AI labs have now trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training. To get around this, DeepSeek announced a collaboration with Tsinghua University to enable “self-principled critique tuning” (SPCT). This approach trains AI to develop its own rules for judging content and then uses those rules to provide detailed critiques. The system includes a built-in “judge” that evaluates the AI’s answers in real-time, comparing responses against core rules and quality standards. The development is part of a movement towards autonomous self-evaluation and improvement in AI systems in which models use inference time to improve results, rather than simply making models larger during training. DeepSeek calls its system “DeepSeek-GRM” (generalist reward modeling). But, as with its model distillation approach, this could be considered a mix of promise and risk. For example, if the AI develops its own judging criteria, there’s a risk those principles diverge from human values, ethics or context. The rules could end up being overly rigid or biased, optimizing for style over substance, and/or reinforce incorrect assumptions or hallucinations. Additionally, without a human in the loop, issues could arise if the “judge” is flawed or misaligned. It’s a kind of AI talking to itself, without robust external grounding. On top of this, users and developers may not understand why the AI reached a certain conclusion — which feeds into a bigger concern: Should an AI be allowed to decide what is “good” or “correct” based solely on its own logic? These risks shouldn’t be discounted. At the same time, this approach is gaining traction, as again DeepSeek builds on the body of work of others (think OpenAI’s “critique and revise” methods, Anthropic’s constitutional AI or research on self-rewarding agents) to create what is likely the first full-stack application of SPCT in a commercial effort. This could mark a powerful shift in AI autonomy, but there still is a need for rigorous auditing, transparency and safeguards. It’s not just about models getting smarter, but that they remain aligned, interpretable, and trustworthy as they begin critiquing themselves without human guardrails. Moving into the future So, taking all of this into account, the rise of DeepSeek signals a broader shift in the AI industry toward parallel innovation tracks. While companies continue building more powerful compute clusters for next-generation capabilities, there will also be intense focus on finding efficiency gains through software engineering and model architecture improvements to offset the challenges of AI energy consumption, which far outpaces power generation capacity.  Companies are taking note. Microsoft, for example, has halted data center development in multiple regions globally, recalibrating toward a more distributed, efficient infrastructure approach. While still planning to invest approximately $80 billion in AI infrastructure this fiscal year, the company is reallocating resources in response to the efficiency gains DeepSeek introduced to the market. Meta has also responded, With so much movement in such a short time, it becomes somewhat ironic that the U.S. sanctions designed to maintain American AI dominance may have instead accelerated the very innovation they sought to contain. By constraining access to materials, DeepSeek was forced to blaze a new trail. Moving forward, as the industry continues to evolve globally, adaptability for all players will be key. Policies, people and market reactions will continue to shift the ground rules — whether it’s eliminating the AI diffusion rule, a new ban on technology purchases or something else entirely. It’s what we learn from one another and how we respond that will be worth watching. Jae Lee is CEO and co-founder of TwelveLabs. Daily insights on business use cases with VB Daily If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI. Read our Privacy Policy Thanks for subscribing. Check out more VB newsletters here. An error occured.
    0 Reacties 0 aandelen
  • Graduate Student Develops an A.I.-Based Approach to Restore Time-Damaged Artwork to Its Former Glory

    Graduate Student Develops an A.I.-Based Approach to Restore Time-Damaged Artwork to Its Former Glory
    The method could help bring countless old paintings, currently stored in the back rooms of galleries with limited conservation budgets, to light

    Scans of the painting retouched with a new technique during various stages in the process. On the right is the restored painting with the applied laminate mask.
    Courtesy of the researchers via MIT

    In a contest for jobs requiring the most patience, art restoration might take first place. Traditionally, conservators restore paintings by recreating the artwork’s exact colors to fill in the damage, one spot at a time. Even with the help of X-ray imaging and pigment analyses, several parts of the expensive process, such as the cleaning and retouching, are done by hand, as noted by Artnet’s Jo Lawson-Tancred.
    Now, a mechanical engineering graduate student at MIT has developed an artificial intelligence-based approach that can achieve a faithful restoration in just hours—instead of months of work.
    In a paper published Wednesday in the journal Nature, Alex Kachkine describes a new method that applies digital restorations to paintings by placing a thin film on top. If the approach becomes widespread, it could make art restoration more accessible and help bring countless damaged paintings, currently stored in the back rooms of galleries with limited conservation budgets, back to light.
    The new technique “is a restoration process that saves a lot of time and money, while also being reversible, which some people feel is really important to preserving the underlying character of a piece,” Kachkine tells Nature’s Amanda Heidt.

    Meet the engineer who invented an AI-powered way to restore art
    Watch on

    While filling in damaged areas of a painting would seem like a logical solution to many people, direct retouching raises ethical concerns for modern conservators. That’s because an artwork’s damage is part of its history, and retouching might detract from the painter’s original vision. “For example, instead of removing flaking paint and retouching the painting, a conservator might try to fix the loose paint particles to their original places,” writes Hartmut Kutzke, a chemist at the University of Oslo’s Museum of Cultural History, for Nature News and Views. If retouching is absolutely necessary, he adds, it should be reversible.
    As such, some institutions have started restoring artwork virtually and presenting the restoration next to the untouched, physical version. Many art lovers might argue, however, that a digital restoration printed out or displayed on a screen doesn’t quite compare to seeing the original painting in its full glory.
    That’s where Kachkine, who is also an art collector and amateur conservator, comes in. The MIT student has developed a way to apply digital restorations onto a damaged painting. In short, the approach involves using pre-existing A.I. tools to create a digital version of what the freshly painted artwork would have looked like. Based on this reconstruction, Kachkine’s new software assembles a map of the retouches, and their exact colors, necessary to fill the gaps present in the painting today.
    The map is then printed onto two layers of thin, transparent polymer film—one with colored retouches and one with the same pattern in white—that attach to the painting with conventional varnish. This “mask” aligns the retouches with the gaps while leaving the rest of the artwork visible.
    “In order to fully reproduce color, you need both white and color ink to get the full spectrum,” Kachkine explains in an MIT statement. “If those two layers are misaligned, that’s very easy to see. So, I also developed a few computational tools, based on what we know of human color perception, to determine how small of a region we can practically align and restore.”
    The method’s magic lies in the fact that the mask is removable, and the digital file provides a record of the modifications for future conservators to study.
    Kachkine demonstrated the approach on a 15th-century oil painting in dire need of restoration, by a Dutch artist whose name is now unknown. The retouches were generated by matching the surrounding color, replicating similar patterns visible elsewhere in the painting or copying the artist’s style in other paintings, per Nature News and Views. Overall, the painting’s 5,612 damaged regions were filled with 57,314 different colors in 3.5 hours—66 hours faster than traditional methods would have likely taken.

    Overview of Physically-Applied Digital Restoration
    Watch on

    “It followed years of effort to try to get the method working,” Kachkine tells the Guardian’s Ian Sample. “There was a fair bit of relief that finally this method was able to reconstruct and stitch together the surviving parts of the painting.”
    The new process still poses ethical considerations, such as whether the applied film disrupts the viewing experience or whether A.I.-generated corrections to the painting are accurate. Additionally, Kutzke writes for Nature News and Views that the effect of the varnish on the painting should be studied more deeply.
    Still, Kachkine says this technique could help address the large number of damaged artworks that live in storage rooms. “This approach grants greatly increased foresight and flexibility to conservators,” per the study, “enabling the restoration of countless damaged paintings deemed unworthy of high conservation budgets.”

    Get the latest stories in your inbox every weekday.
    #graduate #student #develops #aibased #approach
    Graduate Student Develops an A.I.-Based Approach to Restore Time-Damaged Artwork to Its Former Glory
    Graduate Student Develops an A.I.-Based Approach to Restore Time-Damaged Artwork to Its Former Glory The method could help bring countless old paintings, currently stored in the back rooms of galleries with limited conservation budgets, to light Scans of the painting retouched with a new technique during various stages in the process. On the right is the restored painting with the applied laminate mask. Courtesy of the researchers via MIT In a contest for jobs requiring the most patience, art restoration might take first place. Traditionally, conservators restore paintings by recreating the artwork’s exact colors to fill in the damage, one spot at a time. Even with the help of X-ray imaging and pigment analyses, several parts of the expensive process, such as the cleaning and retouching, are done by hand, as noted by Artnet’s Jo Lawson-Tancred. Now, a mechanical engineering graduate student at MIT has developed an artificial intelligence-based approach that can achieve a faithful restoration in just hours—instead of months of work. In a paper published Wednesday in the journal Nature, Alex Kachkine describes a new method that applies digital restorations to paintings by placing a thin film on top. If the approach becomes widespread, it could make art restoration more accessible and help bring countless damaged paintings, currently stored in the back rooms of galleries with limited conservation budgets, back to light. The new technique “is a restoration process that saves a lot of time and money, while also being reversible, which some people feel is really important to preserving the underlying character of a piece,” Kachkine tells Nature’s Amanda Heidt. Meet the engineer who invented an AI-powered way to restore art Watch on While filling in damaged areas of a painting would seem like a logical solution to many people, direct retouching raises ethical concerns for modern conservators. That’s because an artwork’s damage is part of its history, and retouching might detract from the painter’s original vision. “For example, instead of removing flaking paint and retouching the painting, a conservator might try to fix the loose paint particles to their original places,” writes Hartmut Kutzke, a chemist at the University of Oslo’s Museum of Cultural History, for Nature News and Views. If retouching is absolutely necessary, he adds, it should be reversible. As such, some institutions have started restoring artwork virtually and presenting the restoration next to the untouched, physical version. Many art lovers might argue, however, that a digital restoration printed out or displayed on a screen doesn’t quite compare to seeing the original painting in its full glory. That’s where Kachkine, who is also an art collector and amateur conservator, comes in. The MIT student has developed a way to apply digital restorations onto a damaged painting. In short, the approach involves using pre-existing A.I. tools to create a digital version of what the freshly painted artwork would have looked like. Based on this reconstruction, Kachkine’s new software assembles a map of the retouches, and their exact colors, necessary to fill the gaps present in the painting today. The map is then printed onto two layers of thin, transparent polymer film—one with colored retouches and one with the same pattern in white—that attach to the painting with conventional varnish. This “mask” aligns the retouches with the gaps while leaving the rest of the artwork visible. “In order to fully reproduce color, you need both white and color ink to get the full spectrum,” Kachkine explains in an MIT statement. “If those two layers are misaligned, that’s very easy to see. So, I also developed a few computational tools, based on what we know of human color perception, to determine how small of a region we can practically align and restore.” The method’s magic lies in the fact that the mask is removable, and the digital file provides a record of the modifications for future conservators to study. Kachkine demonstrated the approach on a 15th-century oil painting in dire need of restoration, by a Dutch artist whose name is now unknown. The retouches were generated by matching the surrounding color, replicating similar patterns visible elsewhere in the painting or copying the artist’s style in other paintings, per Nature News and Views. Overall, the painting’s 5,612 damaged regions were filled with 57,314 different colors in 3.5 hours—66 hours faster than traditional methods would have likely taken. Overview of Physically-Applied Digital Restoration Watch on “It followed years of effort to try to get the method working,” Kachkine tells the Guardian’s Ian Sample. “There was a fair bit of relief that finally this method was able to reconstruct and stitch together the surviving parts of the painting.” The new process still poses ethical considerations, such as whether the applied film disrupts the viewing experience or whether A.I.-generated corrections to the painting are accurate. Additionally, Kutzke writes for Nature News and Views that the effect of the varnish on the painting should be studied more deeply. Still, Kachkine says this technique could help address the large number of damaged artworks that live in storage rooms. “This approach grants greatly increased foresight and flexibility to conservators,” per the study, “enabling the restoration of countless damaged paintings deemed unworthy of high conservation budgets.” Get the latest stories in your inbox every weekday. #graduate #student #develops #aibased #approach
    WWW.SMITHSONIANMAG.COM
    Graduate Student Develops an A.I.-Based Approach to Restore Time-Damaged Artwork to Its Former Glory
    Graduate Student Develops an A.I.-Based Approach to Restore Time-Damaged Artwork to Its Former Glory The method could help bring countless old paintings, currently stored in the back rooms of galleries with limited conservation budgets, to light Scans of the painting retouched with a new technique during various stages in the process. On the right is the restored painting with the applied laminate mask. Courtesy of the researchers via MIT In a contest for jobs requiring the most patience, art restoration might take first place. Traditionally, conservators restore paintings by recreating the artwork’s exact colors to fill in the damage, one spot at a time. Even with the help of X-ray imaging and pigment analyses, several parts of the expensive process, such as the cleaning and retouching, are done by hand, as noted by Artnet’s Jo Lawson-Tancred. Now, a mechanical engineering graduate student at MIT has developed an artificial intelligence-based approach that can achieve a faithful restoration in just hours—instead of months of work. In a paper published Wednesday in the journal Nature, Alex Kachkine describes a new method that applies digital restorations to paintings by placing a thin film on top. If the approach becomes widespread, it could make art restoration more accessible and help bring countless damaged paintings, currently stored in the back rooms of galleries with limited conservation budgets, back to light. The new technique “is a restoration process that saves a lot of time and money, while also being reversible, which some people feel is really important to preserving the underlying character of a piece,” Kachkine tells Nature’s Amanda Heidt. Meet the engineer who invented an AI-powered way to restore art Watch on While filling in damaged areas of a painting would seem like a logical solution to many people, direct retouching raises ethical concerns for modern conservators. That’s because an artwork’s damage is part of its history, and retouching might detract from the painter’s original vision. “For example, instead of removing flaking paint and retouching the painting, a conservator might try to fix the loose paint particles to their original places,” writes Hartmut Kutzke, a chemist at the University of Oslo’s Museum of Cultural History, for Nature News and Views. If retouching is absolutely necessary, he adds, it should be reversible. As such, some institutions have started restoring artwork virtually and presenting the restoration next to the untouched, physical version. Many art lovers might argue, however, that a digital restoration printed out or displayed on a screen doesn’t quite compare to seeing the original painting in its full glory. That’s where Kachkine, who is also an art collector and amateur conservator, comes in. The MIT student has developed a way to apply digital restorations onto a damaged painting. In short, the approach involves using pre-existing A.I. tools to create a digital version of what the freshly painted artwork would have looked like. Based on this reconstruction, Kachkine’s new software assembles a map of the retouches, and their exact colors, necessary to fill the gaps present in the painting today. The map is then printed onto two layers of thin, transparent polymer film—one with colored retouches and one with the same pattern in white—that attach to the painting with conventional varnish. This “mask” aligns the retouches with the gaps while leaving the rest of the artwork visible. “In order to fully reproduce color, you need both white and color ink to get the full spectrum,” Kachkine explains in an MIT statement. “If those two layers are misaligned, that’s very easy to see. So, I also developed a few computational tools, based on what we know of human color perception, to determine how small of a region we can practically align and restore.” The method’s magic lies in the fact that the mask is removable, and the digital file provides a record of the modifications for future conservators to study. Kachkine demonstrated the approach on a 15th-century oil painting in dire need of restoration, by a Dutch artist whose name is now unknown. The retouches were generated by matching the surrounding color, replicating similar patterns visible elsewhere in the painting or copying the artist’s style in other paintings, per Nature News and Views. Overall, the painting’s 5,612 damaged regions were filled with 57,314 different colors in 3.5 hours—66 hours faster than traditional methods would have likely taken. Overview of Physically-Applied Digital Restoration Watch on “It followed years of effort to try to get the method working,” Kachkine tells the Guardian’s Ian Sample. “There was a fair bit of relief that finally this method was able to reconstruct and stitch together the surviving parts of the painting.” The new process still poses ethical considerations, such as whether the applied film disrupts the viewing experience or whether A.I.-generated corrections to the painting are accurate. Additionally, Kutzke writes for Nature News and Views that the effect of the varnish on the painting should be studied more deeply. Still, Kachkine says this technique could help address the large number of damaged artworks that live in storage rooms. “This approach grants greatly increased foresight and flexibility to conservators,” per the study, “enabling the restoration of countless damaged paintings deemed unworthy of high conservation budgets.” Get the latest stories in your inbox every weekday.
    0 Reacties 0 aandelen