• Smashing Animations Part 4: Optimising SVGs

    SVG animations take me back to the Hanna-Barbera cartoons I watched as a kid. Shows like Wacky Races, The Perils of Penelope Pitstop, and, of course, Yogi Bear. They inspired me to lovingly recreate some classic Toon Titles using CSS, SVG, and SMIL animations.
    But getting animations to load quickly and work smoothly needs more than nostalgia. It takes clean design, lean code, and a process that makes complex SVGs easier to animate. Here’s how I do it.

    Start Clean And Design With Optimisation In Mind
    Keeping things simple is key to making SVGs that are optimised and ready to animate. Tools like Adobe Illustrator convert bitmap images to vectors, but the output often contains too many extraneous groups, layers, and masks. Instead, I start cleaning in Sketch, work from a reference image, and use the Pen tool to create paths.
    Tip: Affinity Designer (UK) and Sketch (Netherlands) are alternatives to Adobe Illustrator and Figma. Both are independent and based in Europe. Sketch has been my default design app since Adobe killed Fireworks.

    Beginning With Outlines
    For these Toon Titles illustrations, I first use the Pen tool to draw black outlines with as few anchor points as possible. The more points a shape has, the bigger a file becomes, so simplifying paths and reducing the number of points makes an SVG much smaller, often with no discernible visual difference.
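
    To see why point count matters, compare two paths that draw the same rectangle. The first wastes anchor points along its straight edges; the second describes them with single commands. (The path data here is made up for illustration and isn’t taken from the Yogi artwork.)
    <!-- Redundant anchor points along straight edges -->
    <path d="M0 0 L25 0 L50 0 L75 0 L100 0 L100 50 L50 50 L0 50 Z"/>
    <!-- The same rectangle, simplified -->
    <path d="M0 0 H100 V50 H0 Z"/>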

    Bearing in mind that parts of this Yogi illustration will ultimately be animated, I keep outlines for this Bewitched Bear’s body, head, collar, and tie separate so that I can move them independently. The head might nod, the tie could flap, and, like in those classic cartoons, Yogi’s collar will hide the joins between them.

    Drawing Simple Background Shapes
    With the outlines in place, I use the Pen tool again to draw new shapes, which fill the areas with colour. These colours sit behind the outlines, so they don’t need to match them exactly. The fewer anchor points, the smaller the file size.
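
    Because the outlines sit on top, the colour shapes underneath can be drawn loosely. A simplified sketch of how one body part might be layered (the colour value is an assumption, not sampled from the artwork):
    <g id="body">
    <!-- loose colour shape drawn with very few points -->
    <path fill="#8a5a2b" d="…"/>
    <!-- precise black outline on top hides any rough edges -->
    <path fill="#000" d="…"/>
    </g>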

    Sadly, neither Affinity Designer nor Sketch has tools that can simplify paths, but if you have it, using Adobe Illustrator can shave a few extra kilobytes off these background shapes.

    Optimising The Code
    It’s not just metadata that makes an SVG bulky. The way you export from your design app also affects file size.

    By default, exporting just those simple background shapes from Adobe Illustrator includes unnecessary groups, masks, and bloated path data. Sketch’s code is barely any better, and there’s plenty of room for improvement, even in its SVGO Compressor output. I rely on Jake Archibald’s SVGOMG, which uses SVGO v3 and consistently delivers the best optimised SVGs.
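
    To give a sense of what gets stripped out, here’s the kind of markup a design app typically exports, compared with what an optimiser leaves behind. (This is a generic example rather than the actual export from this artwork, and the canvas size is an assumption.)
    <!-- Typical design app export: fixed dimensions and editor wrapper groups -->
    <svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="1920px" height="1080px" viewBox="0 0 1920 1080">
    <g id="Page-1" stroke="none" fill="none" fill-rule="evenodd">
    <g id="Artboard" fill="#2b2a33">
    <path id="Path" d="…"/>
    </g>
    </g>
    </svg>
    <!-- After optimisation: just the viewBox and the path -->
    <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1920 1080">
    <path fill="#2b2a33" d="…"/>
    </svg>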

    Layering SVG Elements
    My process for preparing SVGs for animation goes well beyond drawing vectors and optimising paths — it also includes how I structure the code itself. When every visual element is crammed into a single SVG file, even optimised code can be a nightmare to navigate. Locating a specific path or group often feels like searching for a needle in a haystack.

    That’s why I develop my SVGs in layers, exporting and optimising one set of elements at a time — always in the order they’ll appear in the final file. This lets me build the master SVG gradually by pasting in each cleaned-up section. For example, I start with backgrounds like this gradient and title graphic.

    Instead of facing a wall of SVG code, I can now easily identify the background gradient’s path and its associated linearGradient, and see the group containing the title graphic. I take this opportunity to add a comment to the code, which will make editing and adding animations to it easier in the future:
    <svg ...>
    <defs>
    <!-- ... -->
    </defs>
    <path fill="url" d="…"/>
    <!-- TITLE GRAPHIC -->
    <g>
    <path … />
    <!-- ... -->
    </g>
    </svg>

    Next, I add the blurred trail from Yogi’s airborne broom. This includes defining a Gaussian Blur filter and placing its path between the background and title layers:
    <svg ...>
    <defs>
    <linearGradient id="grad" …>…</linearGradient>
    <filter id="trail" …>…</filter>
    </defs>
    <!-- GRADIENT -->
    <!-- TRAIL -->
    <path filter="url" …/>
    <!-- TITLE GRAPHIC -->
    </svg>

    Then come the magical stars, added in the same sequential fashion:
    <svg ...>
    <!-- GRADIENT -->
    <!-- TRAIL -->
    <!-- STARS -->
    <!-- TITLE GRAPHIC -->
    </svg>

    To keep everything organised and animation-ready, I create an empty group that will hold all the parts of Yogi:
    <g id="yogi">...</g>

    Then I build Yogi from the ground up — starting with background props, like his broom:
    <g id="broom">...</g>

    Followed by grouped elements for his body, head, collar, and tie:
    <g id="yogi">
    <g id="broom">…</g>
    <g id="body">…</g>
    <g id="head">…</g>
    <g id="collar">…</g>
    <g id="tie">…</g>
    </g>

    Since I export each layer from the same-sized artboard, I don’t need to worry about alignment or positioning issues later on — they’ll all slot into place automatically. I keep my code clean, readable, and ordered logically by layering elements this way. It also makes animating smoother, as each component is easier to identify.
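
    This works because every exported layer shares the same viewBox as the master file, so its coordinates already line up when I paste it in. A minimal sketch (the 1920 × 1080 canvas is an assumption, and where Yogi’s group sits in the stack is illustrative):
    <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1920 1080">
    <!-- each layer below was exported from an artboard with this same viewBox -->
    <!-- GRADIENT -->
    <!-- TRAIL -->
    <!-- STARS -->
    <g id="yogi">…</g>
    <!-- TITLE GRAPHIC -->
    </svg>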
    Reusing Elements With <use>
    When duplicate shapes get reused repeatedly, SVG files can get bulky fast. My recreation of the “Bewitched Bear” title card contains 80 stars in three sizes. Combining all those shapes into one optimised path would bring the file size down to 3KB. But I want to animate individual stars, which would almost double that to 5KB:
    <g id="stars">
    <path class="star-small" fill="#eae3da" d="..."/>
    <path class="star-medium" fill="#eae3da" d="..."/>
    <path class="star-large" fill="#eae3da" d="..."/>
    <!-- ... -->
    </g>

    Moving the stars’ fill attribute values to their parent group reduces the overall weight a little:
    <g id="stars" fill="#eae3da">
    <path class="star-small" d="…"/>
    <path class="star-medium" d="…"/>
    <path class="star-large" d="…"/>
    <!-- ... -->
    </g>

    But a more efficient and manageable option is to define each star size as a reusable template:

    <defs>
    <path id="star-large" fill="#eae3da" fill-rule="evenodd" d="…"/>
    <path id="star-medium" fill="#eae3da" fill-rule="evenodd" d="…"/>
    <path id="star-small" fill="#eae3da" fill-rule="evenodd" d="…"/>
    </defs>

    With this setup, changing a star’s design only means updating its template once, and every instance updates automatically. Then, I reference each one using <use> and position them with x and y attributes:
    <g id="stars">
    <!-- Large stars -->
    <use href="#star-large" x="1575" y="495"/>
    <!-- ... -->
    <!-- Medium stars -->
    <use href="#star-medium" x="1453" y="696"/>
    <!-- ... -->
    <!-- Small stars -->
    <use href="#star-small" x="1287" y="741"/>
    <!-- ... -->
    </g>

    This approach makes the SVG easier to manage, lighter to load, and faster to iterate on, especially when working with dozens of repeating elements. Best of all, it keeps the markup clean without compromising on flexibility or performance.
    Adding Animations
    The stars trailing behind Yogi’s stolen broom bring so much personality to the animation. I wanted them to sparkle in a seemingly random pattern against the dark blue background, so I started by defining a keyframe animation that cycles through different opacity levels:
    @keyframes sparkle {
    0%, 100% { opacity: .1; }
    50% { opacity: 1; }
    }

    Next, I applied this looping animation to every use element inside my stars group:
    #stars use {
    animation: sparkle 10s ease-in-out infinite;
    }

    The secret to creating a convincing twinkle lies in variation. I staggered animation delays and durations across the stars using nth-child selectors, starting with the quickest and most frequent sparkle effects:
    /* Fast, frequent */
    #stars use:nth-child(n + 1):nth-child(-n + 10) {
    animation-delay: .1s;
    animation-duration: 2s;
    }

    From there, I layered in additional timings to mix things up. Some stars sparkle slowly and dramatically, others more randomly, with a variety of rhythms and pauses:
    /* Medium */
    #stars use:nth-child(n + 11):nth-child(-n + 20) { ... }

    /* Slow, dramatic */
    #stars use:nth-child(n + 21):nth-child(-n + 30) { ... }

    /* Random */
    #stars use:nth-child(3n + 2) { ... }

    /* Alternating */
    #stars use:nth-child(4n + 1) { ... }

    /* Scattered */
    #stars use:nth-child(n + 31) { ... }
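
    Each of those buckets simply pairs a different delay with a different duration. As an illustration only, a medium-speed bucket might look something like this (the timing values are assumptions, not the ones from my demo):
    /* Medium (illustrative values only) */
    #stars use:nth-child(n + 11):nth-child(-n + 20) {
    animation-delay: .6s;
    animation-duration: 4s;
    }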

    By thoughtfully structuring the SVG and reusing elements, I can build complex-looking animations without bloated code, making even a simple effect like changing opacity sparkle.

    Then, for added realism, I make Yogi’s head wobble:

    @keyframes headWobble {
    0% { transform: rotate(-0.8deg) translateY(-0.5px); }
    100% { transform: rotate(0.9deg) translateY(0.3px); }
    }

    #head {
    animation: headWobble 0.8s cubic-bezier(0.5, 0.15, 0.5, 0.85) infinite alternate;
    }

    His tie waves:

    @keyframes tieWave {
    0%, 100% { transform: rotateZ(-4deg) rotateY(15deg) scaleX(0.96); }
    33% { transform: rotateZ(5deg) rotateY(-10deg) scaleX(1.05); }
    66% { transform: rotateZ(-2deg) rotateY(5deg) scaleX(0.98); }
    }

    #tie {
    transform-style: preserve-3d;
    animation: tieWave 10s cubic-bezier(0.68, -0.55, 0.27, 1.55) infinite;
    }

    His broom swings:

    @keyframes broomSwing {
    0%, 20% { transform: rotate(-5deg); }
    30% { transform: rotate(-4deg); }
    50%, 70% { transform: rotate(5deg); }
    80% { transform: rotate(4deg); }
    100% { transform: rotate(-5deg); }
    }

    #broom {
    animation: broomSwing 4s cubic-bezier(0.5, 0.05, 0.5, 0.95) infinite;
    }

    And, finally, Yogi himself gently rotates as he flies on his magical broom:

    @keyframes yogiWobble {
    0% { transform: rotate(-2.8deg) translateY(-0.8px) scale(0.998); }
    30% { transform: rotate(1.5deg) translateY(0.3px); }
    100% { transform: rotate(3.2deg) translateY(1.2px) scale(1.002); }
    }

    #yogi {
    animation: yogiWobble 3.5s cubic-bezier(.37, .14, .3, .86) infinite alternate;
    }

    All these subtle movements bring Yogi to life. By developing structured SVGs, I can create animations that feel full of character without writing a single line of JavaScript.
    Try this yourself:
    See the Pen Bewitched Bear CSS/SVG animation by Andy Clarke.
    Conclusion
    Whether you’re recreating a classic title card or animating icons for an interface, the principles are the same:

    Start clean,
    Optimise early, and
    Structure everything with animation in mind.

    SVGs offer incredible creative freedom, but only if kept lean and manageable. When you plan your process like a production cell — layer by layer, element by element — you’ll spend less time untangling code and more time bringing your work to life.
  • Microsoft's Xbox Handheld Plans Reportedly Shelved; Company to Optimise Windows 11 Gaming Performance

    Microsoft has paused the development of its Xbox handheld gaming console, according to a report. Previously expected to arrive as part of the company's next generation of consoles, the native Xbox handheld has been put on the back-burner. The Redmond company is reportedly working on optimising Windows 11 for handheld consoles, so that it is on par with Valve's SteamOS, which offers better performance and battery efficiency. Other upcoming consoles, like the Xbox-branded Asus device (codenamed Project Kennan), are said to be unaffected by Microsoft's decision.

    Microsoft Shifts Focus to Windows 11 Amid Threat From SteamOS

    Windows Central reports that Microsoft's internal Xbox handheld console has been shelved, which indicates that it might not arrive in 2027, alongside Microsoft's next-gen Xbox consoles. The first-party handheld is not the same as other upcoming portable consoles like Asus' Project Kennan, which is still expected to arrive later this year.

    The company plans to work on optimising Windows 11 to run on handheld consoles, which means that upcoming third-party handhelds could arrive with a more optimised version of Microsoft's desktop operating system. In our reviews of previously released handhelds, we've found that some of the biggest issues with Windows running on these devices include poor battery life, navigation issues, and software updates.

    Microsoft's decision to focus its efforts on Windows 11 for handhelds might have been spurred by SteamOS' expansion beyond the Steam Deck. Earlier this year, Lenovo unveiled the Legion Go S, which offers better performance than the Steam Deck, and also runs on Valve's operating system. SteamOS is also expected to arrive on similar handheld devices in the future.

    SteamOS-powered devices won't be Microsoft's only concern, with the Nintendo Switch 2 right around the corner. The Japanese firm's handheld is slated to arrive in select markets in June, and will compete with existing portable consoles.

    The Redmond company's focus on optimising Windows 11 for handhelds could improve the overall experience of using these devices. Asus is expected to launch its Project Kennan console later in 2025, as per recent reports. The device was recently spotted in a listing on the US FCC website, giving us a good look at its design.

    The report indicates that the first-party handheld from Microsoft might have been capable of running Xbox games, and the company still plans to launch a native handheld. It's currently unclear whether this device will make its debut in 2027 or 2028, which is when the company's next-gen consoles are expected to arrive.

    David Delima

    As a writer on technology with Gadgets 360, David Delima is interested in open-source technology, cybersecurity, consumer privacy, and loves to read and write about how the Internet works. David can be contacted via email at DavidD@ndtv.com, on Twitter at @DxDavey, and Mastodon at mstdn.social/@delima.
  • The big Leslie Benzies interview: MindsEye, Everywhere, and the double-edged sword of GTA

    How Build A Rocket Boy developed its debut project

    Feature

    by Samuel Roberts
    Editorial Director

    Published on May 30, 2025

    As the producer behind the Grand Theft Auto games from GTA 3 through to GTA 5, as well as Red Dead Redemption and LA Noire, any project with Leslie Benzies' name on it is going to be a lightning rod for attention.
    MindsEye, the first game from Benzies' studio Build A Rocket Boy, is getting plenty of it – even if some of that attention has been less positive.
    MindsEye is a single-player third-person shooter with vehicle gameplay, set in a Las Vegas-style city called Redrock. It's a techno-thriller story about a former soldier called Jacob Diaz – but it's clear from visiting BARB in Edinburgh this week that the game is envisioned as a gateway into something much larger, both in the fiction of MindsEye, and for players who pick the game up.
    That includes a user-generated content platform called Build.MindsEye, where players on PC can create levels using relatively straightforward tools that incorporate any object in the game.
    When asked if third-person shooter levels or driving sections were the limits of the build side of MindsEye, the developers showed other examples of how they can be used, like massively increasing the proportions of a basketball, dropping it into the world, and functionally making an in-game version of Rocket League.
    Still, while MindsEye launches on June 10, 2025, for PC and consoles, many questions remain unanswered, including the future of its long-gestating Everywhere project.
    Benzies sat down with GamesIndustry.biz earlier this week to talk us through his vision for the game.
    This interview has been edited for brevity and clarity.

    Image credit: Austin Hargrave

    What's your grand vision for MindsEye? What will it be at launch, and where is it going in the future?
    MindsEye is one story in an epic universe. The other stories take place at different time periods, and different locations in the universe. This story is Jacob Diaz's story. There are also other stories within MindsEye, so we tell the backstories of other characters Jacob will meet.
    That's the way we're going to fill out the universe over time – so when you travel around, all the stories will be connected by one overarching theme, and each story will have different mechanics. And we'll give these mechanics to players within the creator tools.
    What will happen with the game after launch? We will support the game through Play.MindsEye, with continuous new content. Some of the content, like races, is made just for fun. But most of the content, we'll try and incorporate it into the story. So once you've played the big overarching ten-year plan, you'll have a very good idea of what this universe looks like.
    We have plans to add multiplayer, and we have plans to make a full open world. And of course, we've also got to look at what players are creating, and incorporate that into our plans. Given the ease of the tools, we think there's going to be a high percentage of players who will jump in and give it a pop, see how it feels. Hopefully some will create compelling content we can then promote and make that part of our plans to push to other players.
    Is it best to think of MindsEye as the first game in a series of games? Or one game as part of a larger experience?
    MindsEye sits bang in the middle of our story. So, we're going to go back 10,000 years, then we're going to go forward a certain amount of time. It's the relevant piece of the puzzle that will have players asking questions of what the bigger story is.
    We've intentionally not released footage of huge parts of the game, because we don't want to spoil anything for players. But this story does take some unusual twists.

    What's your vision for the multiplayer component of the game?
    I guess there's two sides to the answer. The dream from the building side is to allow players the opportunity to create their own multiplayer open world games with ease. So anyone could pick up the game, jump in, drive around, stop at a point where they see something of interest, build a little mission, jump back in the car, drive again, build another mission. Once you've built a couple of hundred of these, you've built your own open world game. So, that's the build side.
    From our side, we want toa place where people can socialise, play together, and engage in the stories that we build. So, we do have plans next year to launch an open world multiplayer game that takes place a year after MindsEye finishes. In the interim, we also have an open world free roam game that spans from when MindsEye finishes to the launch of the open world multiplayer game.
    All of these stories interconnect in a fairly unique and original way, which I think players like these days. They like the complexity of deeper stories.
    You're selling the base game at launch, with a pass for upcoming content additions. Do you have a vision for how you're going to package future stories in the overarching MindsEye experience?
    It depends on the scale of the story. Some will be free, and some will be paid.
    After you left Rockstar Games, what came next? What led to you building the studio?
    I spent a few years looking into some other things: going into some property development. Using some of the games experience, we made a thing called VR-Chitect, which allowed you to build houses and view them in VR.
    I spent a lot of time in Los Angeles at this point, and this is when the droughts were very bad. I got into these machines that would suck water out of the air. Still sitting in my back garden in Los Angeles is this big clunky machine; it works like an air-conditioning unit. It could suck up one thousand litres of water. So I got involved with that.
    But there's really nothing like making games. The different types of people – the lawyers, the accountants, the programmers, the artists, the dancers, the singers – that bunch of people in one big pot, all working together, and turning something from a piece of paper into something on screen – that's where I get my excitement.
    Since I was a kid, that's what I've wanted to do. I thought, 'I better get back into making games' because nothing else was as much fun.

    What was the journey towards creating MindsEye as your first standalone release?
    Your first game's always your hardest. You have to build systems, you have to build the team. Everything is new. You don't really see a lot on the screen until way down the line, because you're building underlying systems, physics systems, the gameplay systems.
    It's a slow start, but what you end up with is an engine, and obviously we use Unreal, which provides a certain level of support and building. On top of that, we've got to build our own stuff. We also have to pack up everything we build and present it nicely for the creator tools. So it adds this extra layer of complexity to everything. But now, given where we are, the speed that we can iterate, we can very quickly place enemies, place vehicles, place puzzles, whatever, and get a feel for a game.
    We've now got a great, experienced team – a lot of talented guys in there. In the old days, you'd get a game, stick it on the shelf, and you'd wave goodbye. It's not like that anymore. You're continually fixing things.
    When you release a game, you've suddenly got, not a hundred testers, but hopefully millions of testers. You've got to continually fix, continually optimise, and especially with the tools that we've got, we want to continually create new content.
    So MindsEye is a standalone game, and Everywhere is not mentioned anywhere on the Steam page. But obviously there's a strong 'build' component to this game, which was part of the Everywhere pitch. What does this mean for Everywhere, and what was behind the decision to package the game this way?
    This is all part of a bigger story and ecosystem that we've got planned.
    Everywhere is going to show up again pretty soon. Everything we're working on, there's a story behind it – a big overarching story. So Everywhere will come back, and it fits into this story somewhere. I can't tell you, because it would be a spoiler. But that's going to reappear soon, and it will all be a part of the same product.
    "I'm not sure it would've been smart as a company to say, 'we are going to compete with the biggest game on the planet'"
    Leslie Benzies, Build A Rocket Boy
    In terms of the tools, the tool doesn't really care what world you're building in. It sits separately. So any game we create, it will naturally work on top of it. But we're big fans of keeping everything thematically connected, or connected through a narrative, and you'll see it.
    The bigger story will become obvious, once you've played through all of MindsEye. Then you might start to see how it all connects together, to the Everywhere world.
    Has the landscape for something like Everywhere, or the build component to MindsEye, changed as platforms like UEFN have taken off or Roblox has become so huge?
    It's great to see these tools being used by people. I build a lot with my son, and when he builds, I see the excitement he gets. It reminds me of when I was a kid with my Dragon 32 computer, managing to get a little character moving on the screen – that excitement of, 'wow, I did that'. Giving that to other people is massive.
    It's still very difficult to build in Roblox. For example, when my son wants to do it, I have to jump in. I used to be a programmer, and I struggle to build in there.
    When he wants to run around and scream with his friends he's in Roblox; when he wants to build he'll jump into Minecraft, because Minecraft is a much easier system to build within. And I think we sit somewhere in the middle: you can get very high quality, fun games, but they're very easy to build.
    I think we're at the infancy of this in video games. We're at the very beginning of it, and we're going to see way, way more of it. It doesn't necessarily have to be presenting it to your friends, or to an audience. I think the process of creating for a human being is fun in itself.

    MindsEye has been positioned as a linear game. You are best known for creating open world games. What was behind the decision to make MindsEye a more linear, narrative-driven experience?
    I think certain stories are more difficult to present to players in an open world setting. Open world gives you freedom – you don't necessarily want freedom to portray a story. For MindsEye, it's a very set time in a character, Jacob Diaz's, life. You pick up as Jacob when he arrives in Redrock, and then you leave Jacob at a certain point in the future.
    And so, it'd be very difficult for us to have an open world in there. It's horses for courses: it depends what you're doing. But for Jacob's story, it had to be a linear game.
    Having said that, there are open world experiences in there, and we can build them through Build.MindsEye. There is a free roam open world mode, where you play a different character and you see his time, from the end of MindsEye, to the point of our next big planned launch.
    Again, they're all connected through a narrative, and we really want to show the universe, show the stories that have taken place in the universe, the characters in that universe, and see how they've experienced the same experience but from different viewpoints.
    "The dream from the building side is to allow players the opportunity to create their own multiplayer open world games with ease"
    Leslie Benzies, Build A Rocket Boy
    Was there ever a discussion about creating a more traditional GTA competitor?
    In design, you look at a lot of different options.
    I'm not sure it would've been smart as a company to say, 'we are going to compete with the biggest game on the planet'. I'm not sure that would be the best business decision to make. We went through a bunch of different designs, and to tell our story, this is what we landed on.
    MindsEye is priced more like a game from a decade ago, and it'll take around 20 hours to finish. Can you talk about how you settled on the game's length and scope, and how you made that decision around price?
    So you've got the MindsEye campaign, and yes, it'll be about 20ish hours. But you do have all this other side content: there's going to be this continuous stream of content.
    These days, there are so many different options for people. It's not just games: there's streaming TV, so many good shows out there. I don't think you can have filler content in games. I think people want the meat, and they want the potatoes. We've tried to make as much meat as we can, if that makes sense.
    I think that's a good length for a game. What you also find through data is that, with big games, people don't play them all. The majority of people – 60% or 70% of people – don't actually play games to the end.
    So when you're making something, I would prefer – I'm sure the team would say the same – that you had the whole experience from start to finish, and not create this 200-hour game. Create something that is finishable, but have some side things that will fill out the universe. A lot of the side missions on the play side of MindsEye do fill out the characters' back stories, or do fill out what was happening in the world.
    On price: the world's in a funny place. People are worried about the price of eggs. So value for money, I think people appreciate that when times are difficult.

    I was curious why you waited until quite late in the day to reveal the build element of the game, only because it seemed you were being quite church and state with how MindsEye is releasing versus what Everywhere is.
    So in general, we believe – and again, it goes back to the amount of information, the amount of options people have these days – I don't think you can have extended marketing times. It's very expensive, we're a start-up. I think you lose interest from people.
    There are so many things for people to do, that if you extend it, you're not punching through to the place you need to be.
    I've seen other games, nine years before launch, it's getting talked about. I'm not sure that's the way of the world these days. You'll see there are games that never go to market: the day of launch was the marketing campaign, and it worked very well. So I think we tried to compress ours down for that reason.
    On the Play.MindsEye part of it, yeah, maybe we should've got that out there sooner, but it is a nice little surprise to give players.
    That's the thing with marketing – you never know what's the right or wrong way to do it, you've got to go with your gut, your senses, and test it.
    Being who you are, it brings a certain level of expectation and attention. Do you find it a double-edged sword, launching a new studio and launching a new game, with your background?
    Yes. There's always comparisons, and I think that's how humans work.
    As kids, we're taught to put a triangle into a triangular hole, and a square into a square hole. I think we do that for the rest of our lives, and we like to describe something new as 'it's X plus Y, with a bit of Z in there'. It makes things easy for us. It's maybe humans optimising the way we communicate.
    So there are comparisons. It serves us well in some ways, it doesn't serve us well in others. Dave Grohl said it well when he formed the Foo Fighters: nobody's interested in the Foo Fighters, all they were interested in was Nirvana.
    The guys have built something very cool, and I just hope people can see it for what it's trying to be.
  • Interview: Rom Kosla, CIO, Hewlett Packard Enterprise

    When Rom Kosla, CIO at Hewlett Packard Enterprise (HPE), joined the technology giant in July 2023, the move represented a big shift in direction. Previously CIO at retailer Ahold Delhaize and CIO for enterprise solutions at PepsiCo, Kosla was a consumer specialist who wanted to apply his knowledge in a new sector.
    “I liked the idea of working in a different industry,” he says. “I went from consumer products to retail grocery. Moving into the tech industry was a bit nerve-wracking because the concept of who the customers are is different. But since I grew up in IT, I figured I’d have the ability to navigate my way through the company.”
    Kosla had previously worked as a project manager for Nestlé and spent time with the consultancy Deloitte. Now approaching two years with HPE, Kosla leads HPE’s technology strategy and is responsible for how the company harnesses artificial intelligence (AI) and data. He also oversees e-commerce, app development, enterprise resource planning (ERP) and security operations.
    “The role has exceeded my expectations,” he says. “When you’re a CIO at a multinational, like when I was a divisional CIO at PepsiCo, you’re in the back office. Whether it’s strategy, transformation or customer engagement, the systems are the enablers of that back-office effort. At HPE, it’s different because we are customer zero.”
    Kosla says he prefers the term “customer gold” because he wants HPE to develop high-quality products. In addition to setting the internal digital strategy, he has an outward-facing role providing expert advice to customers. That part of his role reminds him of his time at Deloitte.
    “Those are opportunities to flex my prior experience and capabilities, and learn how to take our products, enable them, and share best practices,” he says. “HPE is like any other company. We use cloud systems and software-as-a-service products, including Salesforce and others. But underneath, we have HPE powering a lot of the capabilities.”

    The press release announcing Kosla’s appointment in 2023 said HPE believed his prior experiences in the digital front-end and running complex supply chains made him the perfect person to build on its digital transformation efforts. So, how has that vision panned out?
    “What’s been interesting is helping the business and IT team think about the end-to-end value stream,” he says. “There was a lot of application-specific knowledge. The ability for processes to be optimised at an application layer versus the end-to-end value stream was only happening in certain spots.”
    Kosla discovered the organisation had spent two years moving to a private cloud installation on the company’s hardware and had consolidated 20-plus ERP systems under one SAP instance. With much of the transformation work complete, his focus turned to making the most of these assets.
    “The opportunity was not to shepherd up transformation, it was taking the next step, which was optimising,” says Kosla, explaining how he had boosted supply chain performance in his earlier roles. He’s now applying that knowledge at HPE.
    “What we’ve been doing is slicing areas of opportunity,” he says. “With the lead-to-quote process, for example, we have opportunities to optimise, depending on the type of business, such as the channel and distributors. We’re asking things like, ‘Can we get a quote out as quickly as possible, can we price it correctly, and can we rely less on human engagement?’”
    HPE announced a cost-reduction programme in March to reduce structural operating costs. The programme is expected to be implemented through fiscal year 2026 and deliver gross savings of approximately $350m by fiscal year 2027, including through workforce reductions. The programme of work in IT will help the company move towards these targets.
    Kosla says optimisation in financials might mean closing books faster. In the supply chain, the optimisation might be about predicting the raw materials needed to create products. He takes a term from his time in the consumer-packaged goods sector – right to play, right to win – to explain how his approach helps the business look for value-generating opportunities.
    “So, do we have the right to play, meaning do we have the skills? Where do we have the right to win, meaning do we have the funding, business resources and availability to deliver the results? We spend time focusing on which areas offer the right to play and the right to win.”

    Kosla says data and AI play a key role in these optimisations. HPE uses third-party applications with built-in AI capabilities and has developed an internal chat solution called ChatHPE, a generative AI hub used for internal processes.
    “There are lots of conversations around how we unlock the benefits of AI in the company,” he says. Professionals across the company use Microsoft Copilot in their day-to-day roles to boost productivity. Developers, meanwhile, use GitHub Copilot.
    Finally, there’s ChatHPE, which Kosla says is used according to the functional use case. HPE started developing the platform about 18 months ago. A pipeline of use cases has now been developed, including helping legal teams to review contracts, boosting customer service in operations, re-using campaign elements in marketing and improving analytics in finance.

    “We spend time focusing on which areas offer the right to play and the right to win”
    Rom Kosla, Hewlett Packard Enterprise

    “We have a significant amount of governance internally,” says Kosla, referring to ChatHPE, which is powered by Azure and OpenAI technology. “When I started, there wasn’t an internal HPE AI engine. We had to tell the teams not to use the standard tools because any data that you feed into them is ultimately extracted. So, we had to create our platform.”
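    The article doesn’t describe ChatHPE’s internals beyond it being an Azure- and OpenAI-powered hub with internal governance. Purely as an illustrative sketch of that general pattern – the endpoint variables, deployment name and prompts below are assumptions for the example, not HPE’s actual configuration – a thin wrapper around a private Azure OpenAI deployment might look like this:

```python
# Illustrative sketch only: a governed internal chat hub typically wraps a
# private Azure OpenAI deployment inside the company's own tenancy.
# All names below are hypothetical, not HPE's.
import os
from openai import AzureOpenAI  # openai Python SDK v1.x

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # private company resource
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="contract-review-gpt4o",  # hypothetical deployment name
    messages=[
        {"role": "system",
         "content": "You are an internal contract-review assistant. Quote the clause you rely on."},
        {"role": "user",
         "content": "Summarise the termination and liability terms in the contract text below:\n..."},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```

    In this pattern, prompts and responses stay within the company’s own Azure tenancy rather than a public chatbot, which reflects the governance concern Kosla raises about data fed into standard tools being retained.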
    Embracing AI isn’t Kosla’s only concern. Stabilisation is a big part of what he needs to achieve during the next 12 months. He returns to HPE’s two major transformation initiatives – the shift to private cloud and the consolidation of ERP platforms – suggesting that the dual roll-out and management of these initiatives created a significant number of incidents.
    “When I look back at PepsiCo, we had about 300,000 employees and about 600,000 tickets, which means two tickets per person per year. I said to the executive committee at HPE, ‘We have 60,000 employees, and we have a couple of million tickets’, which is an insane number. The goal was to bring that number down by about 85%,” he says.
    “Now, our system uptime is 99% across our quoting and financial systems. That availability allows our business to do more than focus on internal IT. They can focus on the customer. Stabilisation means the business isn’t constantly thinking about IT systems, because it’s a challenge to execute every day when systems are going down because of issues.”

    Kosla says the long-term aim from an IT perspective is to align the technology organisation with business outcomes. In financials, for example, he wants to produce the data analytics the business needs across the supply chain and operational processes.
    “We have embedded teams that work together to look at how we enable data, like our chat capabilities, into some of the activities,” he says. “They’ll consider how we reduce friction, especially the manual steps. They’ll also consider planning, from raw materials to the manufacturing and delivery of products. That work involves partnering with the business.”
    The key to success for the IT team is to help the business unlock value quicker. “I would say that’s the biggest part for us,” says Kosla. “We don’t even like to use the word speed – we say velocity, because velocity equals direction, and that’s crucial for us. I think the business is happy with what we’ve been able to achieve, but it’s still not fast enough.”
    Being able to deliver results at pace will rely on new levels of flexibility. Rather than being wedded to a 12-month plan that maps out a series of deliverables, Kosla wants his team to work more in the moment. Prior experiences from the consumer sector give him a good sense of what excellence looks like in this area.
    “You don’t need to go back to the top, go through an annual planning review, go back down, and then have the teams twiddling their thumbs while they wait for the OK,” he says.
    “The goal is that teams are constantly working on what’s achievable during a sprint window. Many companies take that approach; I’ve done it in my prior working life. I know what can happen, and I think flexibility will drive value creation.”
    Kosla says some of the value will come from HPE’s in-house developed technologies. “One of the things that makes this role fun is that there’s a significant amount of innovation the company is doing,” he says, pointing to important technologies, such as Morpheus VM Essentials virtualisation software, the observability platform OpsRamp, and Aruba Networking Access Points.
    “What I’m proud of is that we now show up to customers with comparability,” he says, talking about the advisory part of his role. “We can say, ‘Look, we use both products, because in some cases, it’s a migration over time.’ So, for example, when a customer asks about our observability approach, we can compare our technology with other providers.”

    Kosla reflects on his career and ponders the future of the CIO role, suggesting responsibilities will vary considerably according to sector. “Digital leaders still maintain IT systems in some industries,” he says.
    “However, the rest of the business is now much more aware of technology. The blurring of lines between business and IT means it’s tougher to differentiate between the two areas. I think we’ll see more convergence.”
    Kosla says a growing desire to contain costs often creates a close relationship between IT and finance leaders. Once again, he expects further developments in that partnership. He also anticipates that cyber will remain at the forefront of digital leaders’ priority lists.
    More generally, he believes all IT professionals are becoming more focused on business priorities. “I think the blurring will continue to create interesting results, especially in technology companies,” he says. “We want to do things differently.”

    Read more interviews with tech company IT leaders

    Interview: Joe Depa, global chief innovation officer, EY – Accounting firm EY is focused on ‘AI-ready data’ to maximise the benefits of agentic AI and enable the use of emerging frontier technologies for its business and clients.
    Interview: Cynthia Stoddard, CIO, Adobe – After nearly 10 years in post, Adobe’s CIO is still driving digital transformation and looking to deliver lasting change through technology.
    Interview: Tomer Cohen, chief product officer, LinkedIn – The professional social network’s product chief is leading the introduction of artificial intelligence for the firm’s in-house development processes and to enhance services for users.
    #interview #rom #kosla #cio #hewlett
    Interview: Rom Kosla, CIO, Hewlett Packard Enterprise
    When Rom Kosla, CIO at Hewlett Packard Enterprise, joined the technology giant in July 2023, the move represented a big shift in direction. Previously CIO at retailer Ahold Delhaize and CIO for enterprise solutions at PepsiCo, Kosla was a consumer specialist who wanted to apply his knowledge in a new sector. “I liked the idea of working in a different industry,” he says. “I went from consumer products to retail grocery. Moving into the tech industry was a bit nerve-wracking because the concept of who the customers are is different. But since I grew up in IT, I figured I’d have the ability to navigate my way through the company.” Kosla had previously worked as a project manager for Nestlé and spent time with the consultancy Deloitte. Now approaching two years with HPE, Kosla leads HPE’s technology strategy and is responsible for how the company harnesses artificial intelligenceand data. He also oversees e-commerce, app development, enterprise resource planningand security operations. “The role has exceeded my expectations,” he says. “When you’re a CIO at a multinational, like when I was a divisional CIO at PepsiCo, you’re in the back office. Whether it’s strategy, transformation or customer engagement, the systems are the enablers of that back-office effort. At HPE, it’s different because we are customer zero.” Kosla says he prefers the term “customer gold” because he wants HPE to develop high-quality products. In addition to setting the internal digital strategy, he has an outward-facing role providing expert advice to customers. That part of his role reminds him of his time at Deloitte. “Those are opportunities to flex my prior experience and capabilities, and learn how to take our products, enable them, and share best practices,” he says. “HPE is like any other company. We use cloud systems and software-as-a-service products, including Salesforce and others. But underneath, we have HPE powering a lot of the capabilities.” The press release announcing Kosla’s appointment in 2023 said HPE believed his prior experiences in the digital front-end and running complex supply chains made him the perfect person to build on its digital transformation efforts. So, how has that vision panned out? “What’s been interesting is helping the business and IT team think about the end-to-end value stream,” he says. “There was a lot of application-specific knowledge. The ability for processes to be optimised at an application layer versus the end-to-end value stream was only happening in certain spots.” Kosla discovered the organisation had spent two years moving to a private cloud installation on the company’s hardware and had consolidated 20-plus ERP systems under one SAP instance. With much of the transformation work complete, his focus turned to making the most of these assets. “The opportunity was not to shepherd up transformation, it was taking the next step, which was optimising,” says Kosla, explaining how he had boosted supply chain performance in his earlier roles. He’s now applying that knowledge at HPE. “What we’ve been doing is slicing areas of opportunity,” he says. “With the lead-to-quote process, for example, we have opportunities to optimise, depending on the type of business, such as the channel and distributors. We’re asking things like, ‘Can we get a quote out as quickly as possible, can we price it correctly, and can we rely less on human engagement?’” HPE announced a cost-reduction programme in March to reduce structural operating costs. 
The programme is expected to be implemented through fiscal year 2026 and deliver gross savings of approximately m by fiscal year 2027, including through workforce reductions. The programme of work in IT will help the company move towards these targets. Kosla says optimisation in financials might mean closing books faster. In the supply chain, the optimisation might be about predicting the raw materials needed to create products. He takes a term from his time in the consumer-packaged goods sector – right to play, right to win – to explain how his approach helps the business look for value-generating opportunities. “So, do we have the right to play, meaning do we have the skills? Where do we have the right to win, meaning do we have the funding, business resources and availability to deliver the results? We spend time focusing on which areas offer the right to play and the right to win.” Kosla says data and AI play a key role in these optimisations. HPE uses third-party applications with built-in AI capabilities and has developed an internal chat solution called ChatHPE, a generative AI hub used for internal processes. “There are lots of conversations around how we unlock the benefits of AI in the company,” he says. Professionals across the company use Microsoft Copilot in their day-to-day roles to boost productivity. Developers, meanwhile, use GitHub Copilot. Finally, there’s ChatHPE, which Kosla says is used according to the functional use case. HPE started developing the platform about 18 months ago. A pipeline of use cases has now been developed, including helping legal teams to review contracts, boosting customer service in operations, re-using campaign elements in marketing and improving analytics in finance. “We spend time focusing on which areas offer the right to play and the right to win” Rom Kosla, Hewlett Packard Enterprise “We have a significant amount of governance internally,” says Kosla, referring to ChatHPE, which is powered by Azure and OpenAI technology. “When I started, there wasn’t an internal HPE AI engine. We had to tell the teams not to use the standard tools because any data that you feed into them is ultimately extracted. So, we had to create our platform.” Embracing AI isn’t Kosla’s only concern. Stabilisation is a big part of what he needs to achieve during the next 12 months. He returns to HPE’s two major transformation initiatives – the shift to private cloud and the consolidation of ERP platforms – suggesting that the dual roll-out and management of these initiatives created a significant number of incidents. “When I look back at PepsiCo, we had about 300,000 employees and about 600,000 tickets, which means two tickets per person per year. I said to the executive committee at HPE, ‘We have 60,000 employees, and we have a couple of million tickets’, which is an insane number. The goal was to bring that number down by about 85%,” he says. “Now, our system uptime is 99% across our quoting and financial systems. That availability allows our business to do more than focus on internal IT. They can focus on the customer. Stabilisation means the business isn’t constantly thinking about IT systems, because it’s a challenge to execute every day when systems are going down because of issues.” Kosla says the long-term aim from an IT perspective is to align the technology organisation with business outcomes. In financials, for example, he wants to produce the data analytics the business needs across the supply chain and operational processes. 
“We have embedded teams that work together to look at how we enable data, like our chat capabilities, into some of the activities,” he says. “They’ll consider how we reduce friction, especially the manual steps. They’ll also consider planning, from raw materials to the manufacturing and delivery of products. That work involves partnering with the business.” The key to success for the IT team is to help the business unlock value quicker. “I would say that’s the biggest part for us,” says Kosla. “We don’t even like to use the word speed – we say velocity, because velocity equals direction, and that’s crucial for us. I think the business is happy with what we’ve been able to achieve, but it’s still not fast enough.” Being able to deliver results at pace will rely on new levels of flexibility. Rather than being wedded to a 12-month plan that maps out a series of deliverables, Kosla wants his team to work more in the moment. Prior experiences from the consumer sector give him a good sense of what excellence looks like in this area. “You don’t need to go back to the top, go through an annual planning review, go back down, and then have the teams twiddling their thumbs while they wait for the OK,” he says. “The goal is that teams are constantly working on what’s achievable during a sprint window. Many companies take that approach; I’ve done it in my prior working life. I know what can happen, and I think flexibility will drive value creation.” Kosla says some of the value will come from HPE’s in-house developed technologies. “One of the things that makes this role fun is that there’s a significant amount of innovation the company is doing,” he says, pointing to important technologies, such as Morpheus VM Essentials virtualisation software, the observability platform OpsRamp, and Aruba Networking Access Points. “What I’m proud of is that we now show up to customers with comparability,” he says, talking about the advisory part of his role. “We can say, ‘Look, we use both products, because in some cases, it’s a migration over time.’ So, for example, when a customer asks about our observability approach, we can compare our technology with other providers.” Kosla reflects on his career and ponders the future of the CIO role, suggesting responsibilities will vary considerably according to sector. “Digital leaders still maintain IT systems in some industries,” he says. “However, the rest of the business is now much more aware of technology. The blurring of lines between business and IT means it’s tougher to differentiate between the two areas. I think we’ll see more convergence.” Kosla says a growing desire to contain costs often creates a close relationship between IT and finance leaders. Once again, he expects further developments in that partnership. He also anticipates that cyber will remain at the forefront of digital leaders’ priority lists. More generally, he believes all IT professionals are becoming more focused on business priorities. “I think the blurring will continue to create interesting results, especially in technology companies,” he says. “We want to do things differently.” interviews with tech company IT leaders Interview: Joe Depa, global chief innovation officer, EY – Accounting firm EY is focused on ‘AI-ready data’ to maximise the benefits of agentic AI and enable the use of emerging frontier technologies for its business and clients. 
Interview: Cynthia Stoddard, CIO, Adobe – After nearly 10 years in post, Adobe’s CIO is still driving digital transformation and looking to deliver lasting change through technology. Interview: Tomer Cohen, chief product officer, LinkedIn – The professional social network’s product chief is leading the introduction of artificial intelligence for the firm’s in-house development processes and to enhance services for users. #interview #rom #kosla #cio #hewlett
    WWW.COMPUTERWEEKLY.COM
    Interview: Rom Kosla, CIO, Hewlett Packard Enterprise
    When Rom Kosla, CIO at Hewlett Packard Enterprise (HPE), joined the technology giant in July 2023, the move represented a big shift in direction. Previously CIO at retailer Ahold Delhaize and CIO for enterprise solutions at PepsiCo, Kosla was a consumer specialist who wanted to apply his knowledge in a new sector. “I liked the idea of working in a different industry,” he says. “I went from consumer products to retail grocery. Moving into the tech industry was a bit nerve-wracking because the concept of who the customers are is different. But since I grew up in IT, I figured I’d have the ability to navigate my way through the company.” Kosla had previously worked as a project manager for Nestlé and spent time with the consultancy Deloitte. Now approaching two years with HPE, Kosla leads HPE’s technology strategy and is responsible for how the company harnesses artificial intelligence (AI) and data. He also oversees e-commerce, app development, enterprise resource planning (ERP) and security operations. “The role has exceeded my expectations,” he says. “When you’re a CIO at a multinational, like when I was a divisional CIO at PepsiCo, you’re in the back office. Whether it’s strategy, transformation or customer engagement, the systems are the enablers of that back-office effort. At HPE, it’s different because we are customer zero.” Kosla says he prefers the term “customer gold” because he wants HPE to develop high-quality products. In addition to setting the internal digital strategy, he has an outward-facing role providing expert advice to customers. That part of his role reminds him of his time at Deloitte. “Those are opportunities to flex my prior experience and capabilities, and learn how to take our products, enable them, and share best practices,” he says. “HPE is like any other company. We use cloud systems and software-as-a-service products, including Salesforce and others. But underneath, we have HPE powering a lot of the capabilities.” The press release announcing Kosla’s appointment in 2023 said HPE believed his prior experiences in the digital front-end and running complex supply chains made him the perfect person to build on its digital transformation efforts. So, how has that vision panned out? “What’s been interesting is helping the business and IT team think about the end-to-end value stream,” he says. “There was a lot of application-specific knowledge. The ability for processes to be optimised at an application layer versus the end-to-end value stream was only happening in certain spots.” Kosla discovered the organisation had spent two years moving to a private cloud installation on the company’s hardware and had consolidated 20-plus ERP systems under one SAP instance. With much of the transformation work complete, his focus turned to making the most of these assets. “The opportunity was not to shepherd up transformation, it was taking the next step, which was optimising,” says Kosla, explaining how he had boosted supply chain performance in his earlier roles. He’s now applying that knowledge at HPE. “What we’ve been doing is slicing areas of opportunity,” he says. “With the lead-to-quote process, for example, we have opportunities to optimise, depending on the type of business, such as the channel and distributors. We’re asking things like, ‘Can we get a quote out as quickly as possible, can we price it correctly, and can we rely less on human engagement?’” HPE announced a cost-reduction programme in March to reduce structural operating costs. 
    The programme is expected to be implemented through fiscal year 2026 and deliver gross savings of approximately $350m by fiscal year 2027, including through workforce reductions. The programme of work in IT will help the company move towards these targets.
    Kosla says optimisation in financials might mean closing books faster. In the supply chain, the optimisation might be about predicting the raw materials needed to create products. He takes a term from his time in the consumer-packaged goods sector – right to play, right to win – to explain how his approach helps the business look for value-generating opportunities.
    “So, do we have the right to play, meaning do we have the skills? Where do we have the right to win, meaning do we have the funding, business resources and availability to deliver the results? We spend time focusing on which areas offer the right to play and the right to win.”
    Kosla says data and AI play a key role in these optimisations. HPE uses third-party applications with built-in AI capabilities and has developed an internal chat solution called ChatHPE, a generative AI hub used for internal processes. “There are lots of conversations around how we unlock the benefits of AI in the company,” he says.
    Professionals across the company use Microsoft Copilot in their day-to-day roles to boost productivity. Developers, meanwhile, use GitHub Copilot. Finally, there’s ChatHPE, which Kosla says is used according to the functional use case. HPE started developing the platform about 18 months ago. A pipeline of use cases has now been developed, including helping legal teams to review contracts, boosting customer service in operations, re-using campaign elements in marketing and improving analytics in finance.
    “We spend time focusing on which areas offer the right to play and the right to win”
    Rom Kosla, Hewlett Packard Enterprise
    “We have a significant amount of governance internally,” says Kosla, referring to ChatHPE, which is powered by Azure and OpenAI technology. “When I started, there wasn’t an internal HPE AI engine. We had to tell the teams not to use the standard tools because any data that you feed into them is ultimately extracted. So, we had to create our platform.”
    Embracing AI isn’t Kosla’s only concern. Stabilisation is a big part of what he needs to achieve during the next 12 months. He returns to HPE’s two major transformation initiatives – the shift to private cloud and the consolidation of ERP platforms – suggesting that the dual roll-out and management of these initiatives created a significant number of incidents.
    “When I look back at PepsiCo, we had about 300,000 employees and about 600,000 tickets, which means two tickets per person per year. I said to the executive committee at HPE, ‘We have 60,000 employees, and we have a couple of million tickets’, which is an insane number. The goal was to bring that number down by about 85%,” he says.
    “Now, our system uptime is 99% across our quoting and financial systems. That availability allows our business to do more than focus on internal IT. They can focus on the customer. Stabilisation means the business isn’t constantly thinking about IT systems, because it’s a challenge to execute every day when systems are going down because of issues.”
    Kosla says the long-term aim from an IT perspective is to align the technology organisation with business outcomes. In financials, for example, he wants to produce the data analytics the business needs across the supply chain and operational processes.
    “We have embedded teams that work together to look at how we enable data, like our chat capabilities, into some of the activities,” he says. “They’ll consider how we reduce friction, especially the manual steps. They’ll also consider planning, from raw materials to the manufacturing and delivery of products. That work involves partnering with the business.”
    The key to success for the IT team is to help the business unlock value quicker. “I would say that’s the biggest part for us,” says Kosla. “We don’t even like to use the word speed – we say velocity, because velocity equals direction, and that’s crucial for us. I think the business is happy with what we’ve been able to achieve, but it’s still not fast enough.”
    Being able to deliver results at pace will rely on new levels of flexibility. Rather than being wedded to a 12-month plan that maps out a series of deliverables, Kosla wants his team to work more in the moment. Prior experiences from the consumer sector give him a good sense of what excellence looks like in this area.
    “You don’t need to go back to the top, go through an annual planning review, go back down, and then have the teams twiddling their thumbs while they wait for the OK,” he says. “The goal is that teams are constantly working on what’s achievable during a sprint window. Many companies take that approach; I’ve done it in my prior working life. I know what can happen, and I think flexibility will drive value creation.”
    Kosla says some of the value will come from HPE’s in-house developed technologies. “One of the things that makes this role fun is that there’s a significant amount of innovation the company is doing,” he says, pointing to important technologies, such as Morpheus VM Essentials virtualisation software, the observability platform OpsRamp, and Aruba Networking Access Points.
    “What I’m proud of is that we now show up to customers with comparability,” he says, talking about the advisory part of his role. “We can say, ‘Look, we use both products, because in some cases, it’s a migration over time.’ So, for example, when a customer asks about our observability approach, we can compare our technology with other providers.”
    Kosla reflects on his career and ponders the future of the CIO role, suggesting responsibilities will vary considerably according to sector. “Digital leaders still maintain IT systems in some industries,” he says. “However, the rest of the business is now much more aware of technology. The blurring of lines between business and IT means it’s tougher to differentiate between the two areas. I think we’ll see more convergence.”
    Kosla says a growing desire to contain costs often creates a close relationship between IT and finance leaders. Once again, he expects further developments in that partnership. He also anticipates that cyber will remain at the forefront of digital leaders’ priority lists. More generally, he believes all IT professionals are becoming more focused on business priorities.
    “I think the blurring will continue to create interesting results, especially in technology companies,” he says. “We want to do things differently.”
    Read more interviews with tech company IT leaders:
    Interview: Joe Depa, global chief innovation officer, EY – Accounting firm EY is focused on ‘AI-ready data’ to maximise the benefits of agentic AI and enable the use of emerging frontier technologies for its business and clients.
    Interview: Cynthia Stoddard, CIO, Adobe – After nearly 10 years in post, Adobe’s CIO is still driving digital transformation and looking to deliver lasting change through technology.
    Interview: Tomer Cohen, chief product officer, LinkedIn – The professional social network’s product chief is leading the introduction of artificial intelligence for the firm’s in-house development processes and to enhance services for users.
  • Rooms in the Elephant: Feix&Merlin’s restoration of Walworth Town Hall

    On a sunny spring morning in south London, Walworth Square offers a freshly minted moment of respite from the clamorous main road. Around a peculiar new war memorial (to which war? The tracksuited boy perched on a branch is not enlightening), new trees shiver in the breeze, while, beneath them, a man, seemingly the worse for wear, stares vacantly at his scruffy shoes. Another man with enormous shoulders emerges from a gym and begins to take selfies.
    Across the square, steps rise to the grand Victorian jumble of Walworth Town Hall, which hasn’t been a town hall since the mid-1960s. Now, thanks to a fire and the near-bankruptcy of local government, the building houses offices, a café and a community centre. The architect of this transformation, Feix&Merlin, has had to negotiate a problematic inheritance – a (minor) landmark, catastrophic fire damage, impecunious owners and angry locals – and knead it into shape. In this they have succeeded, but the shape that it has assumed will, through no fault of the architects, prove indigestible to some.
    The kernel of the extant structure was built as a church vestry in 1865. It later became Southwark Town Hall and was variously extended. Following the council’s evacuation to Camberwell in 1965, what remained was a public library, a local museum and municipal offices. In 2013 the roof caught fire and much of the Grade II-listed building was reduced to a shell; the remainder rotted behind hoardings until 2022, when work finally commenced on its restoration.

    The protracted nature of this process can ultimately be attributed to chancellor George Osborne’s austerity budget of 2010. Although Southwark had at first intended to return the building to its original uses – and held a competition on this basis in 2015, which was won by Avanti Architects – it realised, on seeing the price tag, that this would be impossible. Avanti was dismissed and a new competition was held in 2018, with a revised brief. This emphasised the long-term commercial sustainability of the building, as well as an element of cultural use, taking into consideration the needs of the local community. The winners were developer General Projects working with Feix&Merlin.
    Their main gambit was to turn the building over to offices. However, on consulting the public while working up their proposal, they quickly realised how upset local residents were about the loss of public ownership. As a result, a community centre was added to the programme. It was the task of the designers to square this circle: how to retain the look and feel of a public building while optimising its new private function. They returned the exterior of the protected structure to its original form, including restoring the pattern to the roof tiles, which had been lost over the years. The ground floor houses the remaining public, or publicly accessible, spaces – the lobby café and community centre. The latter can be hired free of charge by local groups. The rest of the building is now offices. These also occupy its grandest rooms: the former main stair, debating chamber, library and museum. The last two functions have been transferred to a new building across the square, where they are housed in a new ‘heritage centre’.
    The architects have restored the historically significant interiors, more or less, removing the institutional accretions that had latterly defaced them, such as asphalt that had been laid on top of the masonry stairs and the false ceiling that hid the skylight above it. They also exposed the boxed-in balustrades on the mezzanine of the library and restored the parquet flooring throughout. All structural interventions have been achieved using cross-laminated timber. The roof has been reconstructed using it, creating a new storey with some intriguing windowless cubby holes inside its terminal turrets (handy for undistracted meetings). In the former debating chamber, the structure of the roof is exposed to view, a striking piece of engineering. On the ground floor, the ceiling of the space that now houses the lobby-cum-café, which had fallen in during the fire, is supported by hefty wooden arches.
    In some places, the architects have made looser interpretations of the original fabric. The public viewing gallery of the debating chamber has been extended to cover three sides of the room and a pattern derived from the lost balustrade has been cut into sheet steel to create a protective barrier for this new mezzanine. Certain elements, especially in the less important interiors, have been preserved as the fire left them. Where internal walls were removed, their footprint remains, breaking up the parquet, so that, as Julia Feix puts it, visitors can still read the original plan. Above a painted dado, the pitted and scorched surface of the old plaster, or the bricks exposed beneath it, have been preserved in their damaged state. Feix says this approach ‘lets the building talk about its history, rather than creating a pastiche of an era that’s long gone’.
    This move has by now become an established procedure when dealing with rescue jobs, the obvious local example being Battersea Arts Centre, which Haworth Tompkins left similarly scarred following a 2015 fire. Its antecedents stretch back to Hans Döllgast’s post-war work on the Alte Pinakothek in Munich. In its more recent manifestations we could call this approach a fetishisation of decay, which raises questions as to what is being commemorated, and why. In Döllgast’s case, the answers were obvious: the Second World War, in order to prevent wilful amnesia. But in these two more recent examples, where the catastrophes in question were accidental fires, one might ask why a coat of plaster shouldn’t have been applied.

    Walworth Town Hall helps to clarify the logic at work here, which is partly born of necessity. The building could not be restored to its previous condition or use, to the dismay of some locals, including the Walworth Society heritage group. The latter objected to the perceived loss of public access and was concerned that what remained could easily be revoked: for instance, if the café were unprofitable, it could be turned into more offices. They also disliked certain architectural aspects of the proposal, which they called ‘generic’: ‘neither bold and confident designs nor faithful restorations’. After protracted consultation, these concerns were taken into consideration by the architects in the restoration of the more significant rooms. Given the wrangling, it seems to me that, as in the case of Flores & Prats’ Sala Beckett in Barcelona, these patinated surfaces are intended to produce an impression of authenticity, without recourse to (prohibitively expensive) restoration, or ‘pastiche’, as the architects put it. It seems likely, however, that this code speaks more clearly to designers than to members of heritage groups.
    But buildings are not made for heritage groups. Instead, this one is addressing two distinct publics. The community centre still opens to the Walworth Road, with its enduringly working-class character, and has already seen good use. However, the commercial part of the building has been reoriented to the new square to the north, from which it is accessed via the steps we traversed earlier. On the other side of the square rise the brick-slip-clad southern reaches of Elephant Park, the controversial development built by Lendlease on the rubble of the Heygate Estate. The Town Hall has turned its new face to these new Elephantines, the gym-dwellers who can afford to eat in the café and might choose to rent desk space here (if they have to work, that is). To return to my earlier question regarding the catastrophe being commemorated by these charred walls, perhaps the answer is: the conflagration of local government, which produced this double-headed building.
    Tom Wilkinson is a writer, editor and teacher specialising in the history of architecture and the visual culture of modern Germany
    Architect’s view
    As architects, we often aim to deliver transformational change, but at Walworth Town Hall, transformation came through restraint. Rather than imposing a vision, we allowed the building to speak, guiding us in knowing where to intervene and where to hold back.
    One key move was the reinvention of the former debating chamber into a light-filled triple-height space. Historical features were carefully restored, while a new mezzanine with a CNC-cut solid steel balustrade subtly echoes the original decorative railings of the former viewing gallery. The space is now crowned with a new exposed CLT timber roof with a bespoke light feature at its centre. All new structural and architectural elements were executed in timber, speaking to the sustainability agenda, aligning with modern environmental standards and enhancing user wellbeing. Timber’s biophilic properties connect occupants with nature, supporting physical and mental health while improving air quality.
    Crucial to our design language was an honest celebration of the building’s history, including the fire-damaged ‘scars’ that tell its story. While a handful of spaces were traditionally restored, most were approached with a light touch. New finishes were installed only up to the lower dado level, with the rest of the wall surfaces and ceilings left as found, retaining their battle-worn character (cleaned up and made safe, of course). Subtle material changes, such as microcement infills in the parquet, hint at the former wall layouts and structures.
    Striking a balance between restoration and contemporary intervention was essential. It has been a privilege to work on a building with such a legacy, and seeing the community return after more than a decade is deeply rewarding. Walworth Town Hall now honours its past while looking boldly to the future.
    Julia Feix, director, Feix&Merlin Architects

     
    Client’s view
    We approached this project with a vision for developing a new blueprint for bringing at-risk municipal landmarks back to life. Now restored to its former glory and removed from Historic England’s Heritage at Risk register, Walworth Town Hall has been given back to a new generation with an exciting new purpose, made viable and fit for modern standards. In partnership with Southwark Council, and closely collaborating with Historic England and local community groups, we worked with Feix&Merlin to deliver a sensitive but impactful design approach.
    Our vision was that the building’s legacy should be revealed, rather than erased. The result strikes a balance between celebrating its inherited state and adapting it to modern use, combining elements of old and new by making sympathetic references to its beautiful 19th century architecture. Distinctly modern features, such as the use of cross-laminated timber to replace sections of the building damaged by the 2013 fire, are a reflective and contemporary interpretation of the original design. Elephant and Castle is undergoing a significant regeneration and Walworth Town Hall functions as a bridge between the area’s authentic heritage and its new future. Driven by a collaborative process, and tailor-made for small businesses to create, inspire and thrive, the reimagined Walworth Town Hall lays the groundwork for a new creative community to grow in this local destination. 
    Frederic Schwass, chief development officer, General Projects

     
    Engineer’s view
    Heyne Tillett Steel was engaged as structural and civil engineer from competition stage to completion. It was both a challenging restoration of a listed building and an ambitious contemporary reconstruction, in exposed engineered timber, of its pre-fire form – at the same time creating better connectivity and adding floor area.  
    Built in various stages, the existing building comprises nearly all methods of historic construction: timber, masonry, filler joist, clay pot, cast and wrought iron. The building had to be extensively investigated to understand its condition, fitness for reuse and, in some cases, capacity to extend. Particular attention was paid to the impact of the fire and fire dousing in terms of movement, rot and corrosion. Repairs were carried out only where necessary after an extended period of drying and monitoring.
    The original council chamber roof was rebuilt as hybrid trusses (glulam and steel) to span the approximately 13 x 13m double-height volume below. The roof was prefabricated in just four pieces, built off the retained walls and installed in under two weeks. A cross-laminated timber (CLT) covering creates the roof’s truncated pyramid shape.
    A new floor was added within the original massing of the west wing, utilising CLT slabs and a glulam ‘spine’ beam, creating unobstructed, exposed CLT ceilings across 7m bays at either side. The significant amount of retention and timber additions mean that the project scores very highly on benchmarks for embodied carbon, competitive beyond 2030 targets.
    Jonathan Flint, senior associate, Heyne Tillett Steel

     
    Working detail
    The restoration presented a rare opportunity to reimagine a historic structure using sustainable, expressive materials. The original council chamber roof, destroyed by fire, was rebuilt as a hybrid CLT/glulam and steel ties structure, combining the aesthetic warmth of timber with the tensile strength of steel. The new roof had to clear-span approximately 13 x 13m over a double-height volume, and as the truncated pyramid structure was kept exposed, the increased volume of the space added a dramatic effect while introducing a contemporary character.
    Timber was selected not only for its sustainability credentials but also for its light weight, crucial in minimising loads on the existing retained masonry. The trusses were prefabricated offsite in four large components, ensuring precision and reducing construction time and disruption on site. Once craned into position, they were set atop an existing concrete ring beam, a structural necessity installed after the fire to stabilise the perimeter walls in the absence of a roof. This ring beam now discreetly supports the new load paths. The combination of the timber structure with the exposed brick and traditional plaster achieves a visually striking, materially honest reconstruction that honours the building’s historic proportions while firmly rooting it in contemporary sustainable practice.
    Julia Feix, director, Feix&Merlin Architects

     
    Project data
    Location: Southwark, south London
    Start on site: February 2022
    Completion: November 2024
    Gross internal floor area: 5,000m2
    Construction cost: £18.4 million
    Form of contract: Design and build
    Construction cost per m2: £4,500
    Architect: Feix&Merlin Architects
    Client: General Projects
    Structural engineer: Heyne Tillett Steel
    M&E consultant: RED Engineering
    Quantity surveyor: Quartz
    Heritage architect: Donald Insall Associates (planning), Heritage Architecture (tender)
    Planning consultant: Rolfe Judd
    Landscape consultant: Town & Country Gardens
    Acoustic consultant: Sharps Redmore
    Transport consultant: Caneparo Associates
    Project manager: Quartz
    External lighting consultant: Atrium
    Specialist light feature: Barrisol
    Fit-out contractor: White Paper
    Art curation: Art Atelier
    Furniture, fixtures and equipment procurement: Hunter
    Community space operator: WTH Community Space
    Principal designer: ORSA
    CDM co-ordinator: ORSA
    Approved building inspector: Sweco Building Control
    Main contractor: Conamar
    Embodied carbon: 52 kgCO2/m2
  • The Security Interviews: David Faugno, 1Password

    The Security Interviews: David Faugno, 1Password
    David Faugno, co-CEO of 1Password, discusses how his background led to him joining the company and why maintaining profitability is a key factor in overcoming the challenges of switching markets.

    By Peter Ray Allison

    Published: 16 May 2025 12:15

    Although companies may embrace emerging technologies to remain competitive, they can be risk-averse, especially when it comes to changing their customer base. Yet this shift in focus is exactly what 1Password made when it moved from being consumer-focused to providing enterprise-grade security solutions.
    In 2006, the company 1Password developed a password manager of the same name for the Windows, Android, iOS and Linux platforms. Since then, it has earned a reputation for being a secure method for protecting sensitive user information.
    Software licences and other sensitive information can also be securely stored in a virtual vault on the company’s servers, which is locked with a master password guarded by a password-based key derivation function.
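    The article does not spell out 1Password’s exact cryptography, but the basic idea of a password-based key derivation function is easy to illustrate. The sketch below is a minimal Python example using PBKDF2-HMAC-SHA256 from the standard library; the function name, salt handling and iteration count are illustrative assumptions, and 1Password’s real scheme (which also mixes in a separately held Secret Key) is considerably more involved.

```python
# Minimal sketch of password-based key derivation, for illustration only.
# Assumes PBKDF2-HMAC-SHA256 with an arbitrary work factor; this is NOT
# 1Password's actual design, which also incorporates a device-held Secret Key.
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Stretch a master password into a 256-bit key for encrypting a vault."""
    return hashlib.pbkdf2_hmac(
        "sha256",                          # underlying hash function
        master_password.encode("utf-8"),   # the user's master password
        salt,                              # random value stored alongside the vault
        iterations,                        # work factor to slow brute-force attempts
        dklen=32,                          # 32 bytes = 256-bit key
    )

salt = os.urandom(16)                      # generated once per vault/account
key = derive_vault_key("correct horse battery staple", salt)
print(key.hex())
```

    The point of the stretching step is that even if the encrypted vault is stolen from the server, guessing the master password remains computationally expensive.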
    David Faugno had previously been enjoying semi-retirement, working as a board member and adviser for various companies, including 1Password. As his involvement with the company grew, he became increasingly impressed with its collaborative approach and transparency. He was invited to join the company as its president and chief operating officer in September 2023, before becoming co-CEO just over a year later.
    Faugno had previously spent more than 10 years with security and storage provider Barracuda Networks as its chief finance officer. That experience gave him a broad understanding of the security landscape, as well as a unique perspective on the security problems facing organisations of all sizes.
    When Faugno joined 1Password, the world was emerging from the Covid-19 pandemic. Covid transformed the way companies operate by accelerating the adoption of remote working technologies and encouraging people to work from home. Since then, hybrid working has become the norm in many sectors.
    “The world was fundamentally changing. The way people worked and the tools that businesses had provided to their employees to stay safe and secure, and create a secure perimeter, no longer really existed,” says Faugno. “This got accelerated pretty dramatically during the pandemic, which is right at the point in the time when we first invested and got involved.”
    As a consequence of the proliferation of remote working, the security perimeter for an organisation also expanded. Previously, the security perimeter had been at the endpoints of the corporate network, but it has now extended into the homes of employees.
    Most cyber security incidents are due to compromised credentials, such as stolen, weak or reused passwords. Consequently, employees who use weak identification systems at home may inadvertently expose corporate networks to attack.
    It is therefore essential for the security of a corporate network that the devices in employees’ homes are also protected. One method of achieving this is to provide each employee with a free family license for a cyber security package.
    Balancing security and data privacy against accessibility and usability can be challenging, as these aspects are often at odds with each other. Faugno acknowledges that uncompromised security may cause friction during setup and account recovery; however, 1Password decided early in the product development cycle to ensure that the most secure way was also the easiest. This led to rapid uptake of its password manager, which was subsequently adopted by thousands of businesses.
    Faugno soon noted that although 1Password was primarily a consumer-focused product at the time, it was becoming increasingly used in the enterprise sector.
    “When the work environment started to change and people started to get access to resources that were not being necessarily centrally controlled through their SSO, or through the tools that the company had put behind the firewall, these security-centric folks in business thought, ‘Oh, I can use 1Password for this’,” says Faugno.
    “We got pulled into thousands of business environments by these people. That’s when our awakening happened – the battlefield had moved from the building walls to where the end user was, wherever they were, with whatever tools they were using.”
    One of the first things Faugno did when he joined 1Password was to hire a finance leader. By having a sales team engage with enterprise clients to understand their needs, such as administrative controls or additional reporting functionality, 1Password was able to develop its existing platform and market an enterprise service to the business community.
    “When we first made the investment in 1Password in 2019, the company had zero salespeople and pretty much zero accountants,” says Faugno. “It was nothing but developers, building a great product, and support people. Those use cases would organically come, but what we weren’t doing is interfacing with the chief information security officers at large enterprises to share how our platform fits into their overall security architecture.”
    1Password started building infrastructure around enterprise level support and billing capabilities, as well as sales and post-sales implementation capabilities, to allow it to engage with the business sector.
    Any change to a company carries a certain level of risk and expense, especially when it involves adapting to a changing market. It has taken four years, but 1Password’s core business model has created a solid foundation for the company to build on.
    Despite the absence of salespeople and accountants, 1Password’s cash flow had remained profitable. This strong position gave 1Password the opportunity for forward investment (investing in a company to improve a return on investment) without sacrificing profitability.
    Although maintaining durable growth is essential for financial sustainability, it can be challenging. Unless an organisation has a financially stable core product, significant resources can be spent promoting a product to generate a sudden growth spurt, only for that growth to stop as soon as the money runs out.
    1Password had the opportunity to invest in itself while remaining profitable across its different sectors, ensuring durable growth. Rather than optimising purely for profitability, 1Password is forward investing across several areas without needing to pay off debt from a private equity transaction.
    “Over 75% of our sales are to companies, but so many people think of us as a consumer business, because either they know us personally or they’ve seen the legacy of us over the 20 years,” says Faugno.
     The cyber security sector is a constantly evolving market, with an ongoing war of attrition between hackers and security teams: what is cutting edge now could be obsolete in six months’ time. Not only must security companies have a solid product, but they must also constantly update it in response to emerging threats.
    Soon, one of the key challenges that cyber security teams will need robust solutions for is protecting their communications in a post-quantum world. Quantum computers can process vast amounts of information in a fraction of the time that classical computers, including today’s supercomputers, would take. This will have massive implications for cyber security, as quantum computers will be able to quickly break current encryption systems.
    There are various technologies already being developed that are described as quantum resistant, but testing of these is still ongoing. Rather than focusing on a specific technology, 1Password has teams researching emerging challenges. The future security challenge presented by quantum computing necessitates a multifaceted security strategy – 2FA/MFA, passkeys and federation (authentication across networked systems).
    “We have teams that are engaged deeply in thinking about what’s not only the next step, but two steps ahead,” says Faugno. “The world is changing across a number of dimensions, and quantum computing represents one. Passkeys are going to help, but the pathway to passwordless is a journey that’s going to take decades.
    “Our view is that you have to start with the visibility of everything that exists and move everything on the continuum to passwordless. Today, that is having strong and unique passwords and encrypted vaults, adding multi-factor authentication, using passkeys where they’re available, and ultimately moving to federation.”
    Reputation is essential, especially in security. If a tool has proven itself to be viable and effective protection against attacks in the wild, that reputation will carry over into the business sector and naturally generate interest from organisations.
    “If you can build that level of endearment to the end user at the individual level, then what you can do for the business user is very similar,” concludes Faugno. “You can satisfy the most robust and hard-to-crack use case for making someone feel like this tool is helping them be secure and productive.” 

    Read more from the Security Interviews Series

    Armis CEO Yevgeny Dibrov talks about how his military service and intelligence work opened the door into the world of cyber security entrepreneurship.
    Okta regional chief security officer for EMEA sits down with Dan Raywood to talk about how Okta is pivoting to a secure-by-design champion.
    Threat intel expert and author Martin Lee, EMEA technical lead for security research at Cisco Talos, joins Computer Weekly to mark the 35th anniversary of the first ever ransomware attack.

  • How to Set the Number of Trees in Random Forest

    Scientific publication
    T. M. Lange, M. Gültas, A. O. Schmitt & F. Heinrich (2025). optRF: Optimising random forest stability by determining the optimal number of trees. BMC Bioinformatics, 26(1), 95. Follow this LINK to the original publication.

    Random Forest — A Powerful Tool for Anyone Working With Data

    What is Random Forest?

    Have you ever wished you could make better decisions using data — like predicting the risk of diseases, crop yields, or spotting patterns in customer behavior? That’s where machine learning comes in and one of the most accessible and powerful tools in this field is something called Random Forest.

    So why is random forest so popular? For one, it’s incredibly flexible. It works well with many types of data, whether numbers, categories, or both. It’s also widely used in many fields — from predicting patient outcomes in healthcare to detecting fraud in finance, and from improving shopping experiences online to optimising agricultural practices.

    Despite the name, random forest has nothing to do with trees in a forest — but it does use something called Decision Trees to make smart predictions. You can think of a decision tree as a flowchart that asks a series of yes/no questions about the data you give it. A random forest creates a whole bunch of these trees (hence the “forest”), each slightly different, and then combines their results to make one final decision. It’s a bit like asking a group of experts for their opinion and then going with the majority vote.
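
    If you want to see this “many trees, one vote” idea in code before we get to the wheat data below, here is a minimal sketch using R’s built-in iris data and the ranger package. The data set, seed and tree counts are purely illustrative and are not part of the original article:

    library(ranger)

    set.seed(1)                                   # illustrative seed
    single_tree = ranger(Species ~ ., data = iris, num.trees = 1)
    forest      = ranger(Species ~ ., data = iris, num.trees = 500)

    # Both models predict the species of the first flower; for classification,
    # the forest's answer is the majority vote of its 500 slightly different trees.
    predict(single_tree, data = iris[1, ])$predictions
    predict(forest,      data = iris[1, ])$predictions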

    But until recently, one question was unanswered: How many decision trees do I actually need? If each decision tree can lead to different results, averaging many trees would lead to better and more reliable results. But how many are enough? Luckily, the optRF package answers this question!

    So let’s have a look at how to optimise Random Forest for predictions and variable selection!

    Making Predictions with Random Forests

    To optimise and use random forest for making predictions, we can use the open-source statistics programme R. Once we open R, we have to install two R packages: “ranger”, which allows us to use random forests in R, and “optRF”, which optimises them. Both packages are open-source and available via the official R repository, CRAN. To install and load these packages, the following lines of R code can be run:

    > install.packages("ranger")
    > install.packages("optRF")
    > library(ranger)
    > library(optRF)

    Now that the packages are installed and loaded into the library, we can use the functions that these packages contain. Furthermore, we can also use the data set included in the optRF package, which is free to use under the GPL license (just as the optRF package itself). This data set, called SNPdata, contains in its first column the yield of 250 wheat plants as well as 5,000 genomic markers (so-called single nucleotide polymorphisms, or SNPs) that can contain either the value 0 or 2.

    > SNPdata[1:5,1:5]
             Yield SNP_0001 SNP_0002 SNP_0003 SNP_0004
    ID_001 670.7588 0 0 0 0
    ID_002 542.5611 0 2 0 0
    ID_003 591.6631 2 2 0 2
    ID_004 476.3727 0 0 0 0
    ID_005 635.9814 2 2 0 2

    This data set is an example of genomic data and can be used for genomic prediction, which is a very important tool for breeding high-yielding crops and, thus, for fighting world hunger. The idea is to predict the yield of crops using genomic markers. And exactly for this purpose, random forest can be used! That means that a random forest model is used to describe the relationship between the yield and the genomic markers. Afterwards, we can predict the yield of wheat plants for which we only have genomic markers.

    Therefore, let’s imagine that we have 200 wheat plants where we know the yield and the genomic markers. This is the so-called training data set. Let’s further assume that we have 50 wheat plants where we know the genomic markers but not their yield. This is the so-called test data set. Thus, we separate the data frame SNPdata so that the first 200 rows are saved as training and the last 50 rows without their yield are saved as test data:

    > Training = SNPdata[1:200,]
    > Test = SNPdata[201:250,-1]

    With these data sets, we can now have a look at how to make predictions using random forests!

    First, we have to calculate the optimal number of trees for random forest. Since we want to make predictions, we use the function opt_prediction from the optRF package. Into this function, we have to insert the response from the training data set, the predictors from the training data set, and the predictors from the test data set. Before we run this function, we can use the set.seed function to ensure reproducibility, even though this is not strictly necessary:

    > set.seed(123)   # any fixed seed; the value used in the original is not shown
    > optRF_result = opt_prediction(y = Training[,1],
    +                               X = Training[,-1],
    +                               X_Test = Test)   # argument names assumed: response, training predictors, test predictors
    Recommended number of trees: 19000

    All the results from the opt_prediction function are now saved in the object optRF_result, however, the most important information was already printed in the console: For this data set, we should use 19,000 trees.

    With this information, we can now use random forest to make predictions. Therefore, we use the ranger function to derive a random forest model that describes the relationship between the genomic markers and the yield in the training data set. Also here, we have to insert the response in the y argument and the predictors in the x argument. Furthermore, we can set the write.forest argument to be TRUE and we can insert the optimal number of trees in the num.trees argument:

    > RF_model = ranger(y = Training[,1], x = Training[,-1],
    +                   write.forest = TRUE, num.trees = 19000)

    And that’s it! The object RF_model contains the random forest model that describes the relationship between the genomic markers and the yield. With this model, we can now predict the yield for the 50 plants in the test data set where we have the genomic markers but don’t know the yield:

    > predictions = predict(RF_model, data = Test)$predictions
    > predicted_Test = data.frame(ID = row.names(Test), predicted_yield = predictions)

    The data frame predicted_Test now contains the IDs of the wheat plants together with their predicted yield:

    > head(predicted_Test)
        ID predicted_yield
    ID_201 593.6063
    ID_202 596.8615
    ID_203 591.3695
    ID_204 589.3909
    ID_205 599.5155
    ID_206 608.1031

    Variable Selection with Random Forests

    A different approach to analysing such a data set would be to find out which variables are most important for predicting the response. In this case, the question would be which genomic markers are most important for predicting the yield. This, too, can be done with random forests!

    If we tackle such a task, we don’t need a training and a test data set. We can simply use the entire data set SNPdata and see which of the variables are the most important ones. But before we do that, we should again determine the optimal number of trees using the optRF package. Since we are interested in calculating the variable importance, we use the function opt_importance:

    > set.seed(123)   # again, an illustrative seed value
    > optRF_result = opt_importance(y = SNPdata[,1],
    +                               X = SNPdata[,-1])   # argument names assumed: response and predictors
    Recommended number of trees: 40000

    One can see that the optimal number of trees is now higher than it was for predictions. This is actually often the case. However, with this number of trees, we can now use the ranger function to calculate the importance of the variables. Therefore, we use the ranger function as before but we change the number of trees in the num.trees argument to 40,000 and we set the importance argument to “permutation”. 

    > set.seed(123)
    > RF_model = ranger(y = SNPdata[,1], x = SNPdata[,-1],
    +                   num.trees = 40000, importance = "permutation")
    > D_VI = data.frame(variable = names(SNPdata)[-1],
    +                   importance = RF_model$variable.importance)
    > D_VI = D_VI[order(D_VI$importance, decreasing = TRUE),]

    The data frame D_VI now contains all the variables, thus all the genomic markers, and, next to them, their importance. Also, we have directly ordered this data frame so that the most important markers are at the top and the least important markers are at the bottom. This means that we can have a look at the most important variables using the head function:

    > head(D_VI)
      variable importance
    SNP_0020 45.75302
    SNP_0004 38.65594
    SNP_0019 36.81254
    SNP_0050 34.56292
    SNP_0033 30.47347
    SNP_0043 28.54312

    And that’s it! We have used random forest to make predictions and to estimate the most important variables in a data set. Furthermore, we have optimised random forest using the optRF package!

    Why Do We Need Optimisation?

    Now that we’ve seen how easy it is to use random forest and how quickly it can be optimised, it’s time to take a closer look at what’s happening behind the scenes. Specifically, we’ll explore how random forest works and why the results might change from one run to another.

    To do this, we’ll use random forest to calculate the importance of each genomic marker but instead of optimising the number of trees beforehand, we’ll stick with the default settings in the ranger function. By default, ranger uses 500 decision trees. Let’s try it out:

    > set.seed(123)
    > RF_model = ranger(y = SNPdata[,1], x = SNPdata[,-1],
    +                   importance = "permutation")
    > D_VI = data.frame(variable = names(SNPdata)[-1],
    +                   importance = RF_model$variable.importance)
    > D_VI = D_VI[order(D_VI$importance, decreasing = TRUE),]
    > head(D_VI)
      variable importance
    SNP_0020 80.22909
    SNP_0019 60.37387
    SNP_0043 50.52367
    SNP_0005 43.47999
    SNP_0034 38.52494
    SNP_0015 34.88654

    As expected, everything runs smoothly — and quickly! In fact, this run was significantly faster than when we previously used 40,000 trees. But what happens if we run the exact same code again but this time with a different seed?

    > set.seed(321)   # a different, again illustrative, seed value
    > RF_model2 = ranger(y = SNPdata[,1], x = SNPdata[,-1],
    +                    importance = "permutation")
    > D_VI2 = data.frame(variable = names(SNPdata)[-1],
    +                    importance = RF_model2$variable.importance)
    > D_VI2 = D_VI2[order(D_VI2$importance, decreasing = TRUE),]
    > head(D_VI2)
      variable importance
    SNP_0050 60.64051
    SNP_0043 58.59175
    SNP_0033 52.15701
    SNP_0020 51.10561
    SNP_0015 34.86162
    SNP_0019 34.21317

    Once again, everything appears to work fine, but take a closer look at the results. In the first run, SNP_0020 had the highest importance score at 80.23, but in the second run, SNP_0050 takes the top spot and SNP_0020 drops to fourth place with a much lower importance score of 51.11. That’s a significant shift! So what changed?

    The answer lies in something called non-determinism. Random forest, as the name suggests, involves a lot of randomness: it randomly selects data samples and subsets of variables at various points during training. This randomness helps prevent overfitting, but it also means that results can vary slightly each time you run the algorithm — even with the exact same data set. That’s where the set.seed function comes in. It acts like a bookmark in a shuffled deck of cards. By setting the same seed, you ensure that the random choices made by the algorithm follow the same sequence every time you run the code. But when you change the seed, you’re effectively changing the random path the algorithm follows. That’s why, in our example, the most important genomic markers came out differently in each run. This behavior — where the same process can yield different results due to internal randomness — is a classic example of non-determinism in machine learning.
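
    To make the role of set.seed concrete, here is a tiny, self-contained sketch in base R; the seed values 42 and 7 are arbitrary, chosen only for illustration, and the same principle applies to the random draws inside ranger:

    set.seed(42)
    sample(1:10, 3)   # some random draw of three numbers

    set.seed(42)
    sample(1:10, 3)   # exactly the same draw as above, because the seed is the same

    set.seed(7)
    sample(1:10, 3)   # a different seed gives a different draw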

    Taming the Randomness in Random Forests

    As we just saw, random forest models can produce slightly different results every time you run them, even on the same data, due to the algorithm’s built-in randomness. So, how can we reduce this randomness and make our results more stable?

    One of the simplest and most effective ways is to increase the number of trees. Each tree in a random forest is trained on a random subset of the data and variables, so the more trees we add, the better the model can “average out” the noise caused by individual trees. Think of it like asking 10 people for their opinion versus asking 1,000 — you’re more likely to get a reliable answer from the larger group.

    With more trees, the model’s predictions and variable importance rankings tend to become more stable and reproducible even without setting a specific seed. In other words, adding more trees helps to tame the randomness. However, there’s a catch. More trees also mean more computation time. Training a random forest with 500 trees might take a few seconds but training one with 40,000 trees could take several minutes or more, depending on the size of your data set and your computer’s performance.

    However, the relationship between the stability and the computation time of random forest is non-linear. While going from 500 to 1,000 trees can significantly improve stability, going from 5,000 to 10,000 trees might only provide a tiny improvement in stability while doubling the computation time. At some point, you hit a plateau where adding more trees gives diminishing returns — you pay more in computation time but gain very little in stability. That’s why it’s essential to find the right balance: Enough trees to ensure stable results but not so many that your analysis becomes unnecessarily slow.
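
    One rough way to see this plateau for yourself, without the optRF package, is to run the importance calculation twice per forest size and check how well the two rankings agree. The sketch below does this with a Spearman rank correlation; the helper name stability_check, the seeds and the forest sizes are all illustrative, and it assumes the SNPdata set loaded earlier (the larger forests take a while to train):

    library(ranger)
    library(optRF)   # provides the SNPdata example data set used above

    # Train two forests of the same size with different seeds and measure how
    # strongly their permutation-importance rankings agree (1 = identical ranking).
    stability_check = function(n_trees) {
      imp = lapply(c(1, 2), function(seed) {
        set.seed(seed)                         # illustrative seeds
        ranger(y = SNPdata[, 1], x = SNPdata[, -1],
               num.trees = n_trees,
               importance = "permutation")$variable.importance
      })
      cor(imp[[1]], imp[[2]], method = "spearman")
    }

    # Agreement should increase with the number of trees and then level off.
    sapply(c(500, 2000, 10000), stability_check)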

    And this is exactly what the optRF package does: it analyses the relationship between the stability and the number of trees in random forests and uses this relationship to determine the optimal number of trees that leads to stable results and beyond which adding more trees would unnecessarily increase the computation time.

    Above, we have already used the opt_importance function and saved the results as optRF_result. This object contains the optimal number of trees, but it also contains information about the relationship between stability and the number of trees. Using the plot_stability function, we can visualise this relationship. To do so, we insert the name of the optRF object, the measure we are interested in, the interval we want to visualise on the X axis, and whether the recommended number of trees should be added:

    > plot_stability(optRF_result, measure = "importance",
    +                from = 0, to = 50000, add_recommendation = TRUE)   # argument names assumed; see the optRF documentation

    The output of the plot_stability function visualises the stability of random forest depending on the number of decision trees.

    This plot clearly shows the non-linear relationship between stability and the number of trees. With 500 trees, random forest only reaches a stability of around 0.2, which explains why the results changed drastically when random forest was repeated with a different seed. With the recommended 40,000 trees, however, the stability is near 1. Adding more than 40,000 trees would push the stability slightly closer to 1, but the gain would be very small while the computation time would keep increasing. That is why 40,000 trees is the optimal number for this data set.

    The Takeaway: Optimise Random Forest to Get the Most Out of It

    Random forest is a powerful ally for anyone working with data — whether you’re a researcher, analyst, student, or data scientist. It’s easy to use, remarkably flexible, and highly effective across a wide range of applications. But like any tool, using it well means understanding what’s happening under the hood. In this post, we’ve uncovered one of its hidden quirks: The randomness that makes it strong can also make it unstable if not carefully managed. Fortunately, with the optRF package, we can strike the perfect balance between stability and performance, ensuring we get reliable results without wasting computational resources. Whether you’re working in genomics, medicine, economics, agriculture, or any other data-rich field, mastering this balance will help you make smarter, more confident decisions based on your data.
    #how #set #number #trees #random
    How to Set the Number of Trees in Random Forest
    Scientific publication T. M. Lange, M. Gültas, A. O. Schmitt & F. Heinrich. optRF: Optimising random forest stability by determining the optimal number of trees. BMC bioinformatics, 26, 95.Follow this LINK to the original publication. Random Forest — A Powerful Tool for Anyone Working With Data What is Random Forest? Have you ever wished you could make better decisions using data — like predicting the risk of diseases, crop yields, or spotting patterns in customer behavior? That’s where machine learning comes in and one of the most accessible and powerful tools in this field is something called Random Forest. So why is random forest so popular? For one, it’s incredibly flexible. It works well with many types of data whether numbers, categories, or both. It’s also widely used in many fields — from predicting patient outcomes in healthcare to detecting fraud in finance, from improving shopping experiences online to optimising agricultural practices. Despite the name, random forest has nothing to do with trees in a forest — but it does use something called Decision Trees to make smart predictions. You can think of a decision tree as a flowchart that guides a series of yes/no questions based on the data you give it. A random forest creates a whole bunch of these trees, each slightly different, and then combines their results to make one final decision. It’s a bit like asking a group of experts for their opinion and then going with the majority vote. But until recently, one question was unanswered: How many decision trees do I actually need? If each decision tree can lead to different results, averaging many trees would lead to better and more reliable results. But how many are enough? Luckily, the optRF package answers this question! So let’s have a look at how to optimise Random Forest for predictions and variable selection! Making Predictions with Random Forests To optimise and to use random forest for making predictions, we can use the open-source statistics programme R. Once we open R, we have to install the two R packages “ranger” which allows to use random forests in R and “optRF” to optimise random forests. Both packages are open-source and available via the official R repository CRAN. In order to install and load these packages, the following lines of R code can be run: > install.packages> install.packages> library> libraryNow that the packages are installed and loaded into the library, we can use the functions that these packages contain. Furthermore, we can also use the data set included in the optRF package which is free to use under the GPL license. This data set called SNPdata contains in the first column the yield of 250 wheat plants as well as 5000 genomic markersthat can contain either the value 0 or 2. > SNPdataYield SNP_0001 SNP_0002 SNP_0003 SNP_0004 ID_001 670.7588 0 0 0 0 ID_002 542.5611 0 2 0 0 ID_003 591.6631 2 2 0 2 ID_004 476.3727 0 0 0 0 ID_005 635.9814 2 2 0 2 This data set is an example for genomic data and can be used for genomic prediction which is a very important tool for breeding high-yielding crops and, thus, to fight world hunger. The idea is to predict the yield of crops using genomic markers. And exactly for this purpose, random forest can be used! That means that a random forest model is used to describe the relationship between the yield and the genomic markers. Afterwards, we can predict the yield of wheat plants where we only have genomic markers. Therefore, let’s imagine that we have 200 wheat plants where we know the yield and the genomic markers. 
This is the so-called training data set. Let’s further assume that we have 50 wheat plants where we know the genomic markers but not their yield. This is the so-called test data set. Thus, we separate the data frame SNPdata so that the first 200 rows are saved as training and the last 50 rows without their yield are saved as test data: > Training = SNPdata> Test = SNPdataWith these data sets, we can now have a look at how to make predictions using random forests! First, we got to calculate the optimal number of trees for random forest. Since we want to make predictions, we use the function opt_prediction from the optRF package. Into this function we have to insert the response from the training data set, the predictors from the training data set, and the predictors from the test data set. Before we run this function, we can use the set.seed function to ensure reproducibility even though this is not necessary: > set.seed> optRF_result = opt_predictionRecommended number of trees: 19000 All the results from the opt_prediction function are now saved in the object optRF_result, however, the most important information was already printed in the console: For this data set, we should use 19,000 trees. With this information, we can now use random forest to make predictions. Therefore, we use the ranger function to derive a random forest model that describes the relationship between the genomic markers and the yield in the training data set. Also here, we have to insert the response in the y argument and the predictors in the x argument. Furthermore, we can set the write.forest argument to be TRUE and we can insert the optimal number of trees in the num.trees argument: > RF_model = rangerAnd that’s it! The object RF_model contains the random forest model that describes the relationship between the genomic markers and the yield. With this model, we can now predict the yield for the 50 plants in the test data set where we have the genomic markers but we don’t know the yield: > predictions = predict$predictions > predicted_Test = data.frame, predicted_yield = predictions) The data frame predicted_Test now contains the IDs of the wheat plants together with their predicted yield: > headID predicted_yield ID_201 593.6063 ID_202 596.8615 ID_203 591.3695 ID_204 589.3909 ID_205 599.5155 ID_206 608.1031 Variable Selection with Random Forests A different approach to analysing such a data set would be to find out which variables are most important to predict the response. In this case, the question would be which genomic markers are most important to predict the yield. Also this can be done with random forests! If we tackle such a task, we don’t need a training and a test data set. We can simply use the entire data set SNPdata and see which of the variables are the most important ones. But before we do that, we should again determine the optimal number of trees using the optRF package. Since we are insterested in calculating the variable importance, we use the function opt_importance: > set.seed> optRF_result = opt_importanceRecommended number of trees: 40000 One can see that the optimal number of trees is now higher than it was for predictions. This is actually often the case. However, with this number of trees, we can now use the ranger function to calculate the importance of the variables. Therefore, we use the ranger function as before but we change the number of trees in the num.trees argument to 40,000 and we set the importance argument to “permutation”.  
> set.seed> RF_model = ranger> D_VI = data.frame, + importance = RF_model$variable.importance) > D_VI = D_VIThe data frame D_VI now contains all the variables, thus, all the genomic markers, and next to it, their importance. Also, we have directly ordered this data frame so that the most important markers are on the top and the least important markers are at the bottom of this data frame. Which means that we can have a look at the most important variables using the head function: > headvariable importance SNP_0020 45.75302 SNP_0004 38.65594 SNP_0019 36.81254 SNP_0050 34.56292 SNP_0033 30.47347 SNP_0043 28.54312 And that’s it! We have used random forest to make predictions and to estimate the most important variables in a data set. Furthermore, we have optimised random forest using the optRF package! Why Do We Need Optimisation? Now that we’ve seen how easy it is to use random forest and how quickly it can be optimised, it’s time to take a closer look at what’s happening behind the scenes. Specifically, we’ll explore how random forest works and why the results might change from one run to another. To do this, we’ll use random forest to calculate the importance of each genomic marker but instead of optimising the number of trees beforehand, we’ll stick with the default settings in the ranger function. By default, ranger uses 500 decision trees. Let’s try it out: > set.seed> RF_model = ranger> D_VI = data.frame, + importance = RF_model$variable.importance) > D_VI = D_VI> headvariable importance SNP_0020 80.22909 SNP_0019 60.37387 SNP_0043 50.52367 SNP_0005 43.47999 SNP_0034 38.52494 SNP_0015 34.88654 As expected, everything runs smoothly — and quickly! In fact, this run was significantly faster than when we previously used 40,000 trees. But what happens if we run the exact same code again but this time with a different seed? > set.seed> RF_model2 = ranger> D_VI2 = data.frame, + importance = RF_model2$variable.importance) > D_VI2 = D_VI2> headvariable importance SNP_0050 60.64051 SNP_0043 58.59175 SNP_0033 52.15701 SNP_0020 51.10561 SNP_0015 34.86162 SNP_0019 34.21317 Once again, everything appears to work fine but take a closer look at the results. In the first run, SNP_0020 had the highest importance score at 80.23, but in the second run, SNP_0050 takes the top spot and SNP_0020 drops to the fourth place with a much lower importance score of 51.11. That’s a significant shift! So what changed? The answer lies in something called non-determinism. Random forest, as the name suggests, involves a lot of randomness: it randomly selects data samples and subsets of variables at various points during training. This randomness helps prevent overfitting but it also means that results can vary slightly each time you run the algorithm — even with the exact same data set. That’s where the set.seedfunction comes in. It acts like a bookmark in a shuffled deck of cards. By setting the same seed, you ensure that the random choices made by the algorithm follow the same sequence every time you run the code. But when you change the seed, you’re effectively changing the random path the algorithm follows. That’s why, in our example, the most important genomic markers came out differently in each run. This behavior — where the same process can yield different results due to internal randomness — is a classic example of non-determinism in machine learning. 
    How to Set the Number of Trees in Random Forest
Scientific publication: T. M. Lange, M. Gültas, A. O. Schmitt & F. Heinrich (2025). optRF: Optimising random forest stability by determining the optimal number of trees. BMC Bioinformatics, 26(1), 95. Follow this LINK to the original publication.

Random Forest — A Powerful Tool for Anyone Working With Data

What is Random Forest?
Have you ever wished you could make better decisions using data — like predicting the risk of diseases, crop yields, or spotting patterns in customer behavior? That's where machine learning comes in, and one of the most accessible and powerful tools in this field is something called random forest.
So why is random forest so popular? For one, it's incredibly flexible. It works well with many types of data, whether numbers, categories, or both. It's also widely used in many fields — from predicting patient outcomes in healthcare to detecting fraud in finance, from improving shopping experiences online to optimising agricultural practices.
Despite the name, random forest has nothing to do with trees in a forest — but it does use something called decision trees to make smart predictions. You can think of a decision tree as a flowchart that guides you through a series of yes/no questions based on the data you give it. A random forest creates a whole bunch of these trees (hence the "forest"), each slightly different, and then combines their results to make one final decision. It's a bit like asking a group of experts for their opinion and then going with the majority vote.
But until recently, one question remained unanswered: how many decision trees do I actually need? If each individual tree can lead to different results, averaging over many trees should give better and more reliable results. But how many are enough? Luckily, the optRF package answers this question! So let's have a look at how to optimise random forest for predictions and variable selection.

Making Predictions with Random Forests
To optimise random forest and use it for making predictions, we can use the open-source statistics programme R. Once we open R, we have to install two R packages: "ranger", which allows us to use random forests in R, and "optRF", which optimises them. Both packages are open-source and available via the official R repository CRAN. To install and load these packages, the following lines of R code can be run:

> install.packages("ranger")
> install.packages("optRF")
> library(ranger)
> library(optRF)

Now that the packages are installed and loaded into the library, we can use the functions that these packages contain. Furthermore, we can also use the data set included in the optRF package, which is free to use under the GPL license (just like the optRF package itself). This data set, called SNPdata, contains the yield of 250 wheat plants in its first column, followed by 5,000 genomic markers (so-called single nucleotide polymorphisms, or SNPs) that can take either the value 0 or 2.

> SNPdata[1:5,1:5]
          Yield SNP_0001 SNP_0002 SNP_0003 SNP_0004
ID_001 670.7588        0        0        0        0
ID_002 542.5611        0        2        0        0
ID_003 591.6631        2        2        0        2
ID_004 476.3727        0        0        0        0
ID_005 635.9814        2        2        0        2

This data set is an example of genomic data and can be used for genomic prediction, a very important tool for breeding high-yielding crops and, thus, fighting world hunger. The idea is to predict the yield of crops using genomic markers. And exactly for this purpose, random forest can be used! That means that a random forest model is used to describe the relationship between the yield and the genomic markers.
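Before fitting anything, it can help to get a quick feel for the data. The lines below are a small sketch of one way to do that in base R (not part of the original workflow); they simply check the dimensions of SNPdata and summarise the yield column:

> # Sketch: quick overview of SNPdata (available once library(optRF) has been run)
> dim(SNPdata)               # should report 250 rows and 5001 columns (yield + 5000 SNPs)
> summary(SNPdata$Yield)     # distribution of the wheat yield
> sort(unique(SNPdata[,2]))  # a single SNP column contains only the values 0 and/or 2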
Afterwards, we can predict the yield of wheat plants for which we only have genomic markers. Therefore, let's imagine that we have 200 wheat plants where we know both the yield and the genomic markers. This is the so-called training data set. Let's further assume that we have 50 wheat plants where we know the genomic markers but not their yield. This is the so-called test data set. Thus, we separate the data frame SNPdata so that the first 200 rows are saved as training data and the last 50 rows, without their yield, are saved as test data:

> Training = SNPdata[1:200,]
> Test = SNPdata[201:250,-1]

With these data sets, we can now have a look at how to make predictions using random forests! First, we have to calculate the optimal number of trees for random forest. Since we want to make predictions, we use the function opt_prediction from the optRF package. Into this function we have to insert the response from the training data set (in this case the yield), the predictors from the training data set (in this case the genomic markers), and the predictors from the test data set. Before we run this function, we can use the set.seed function to ensure reproducibility, even though this is not strictly necessary (we will see later why reproducibility is an issue here):

> set.seed(123)
> optRF_result = opt_prediction(y = Training[,1],
+                               X = Training[,-1],
+                               X_Test = Test)
Recommended number of trees: 19000

All the results from the opt_prediction function are now saved in the object optRF_result; however, the most important information was already printed in the console: for this data set, we should use 19,000 trees.
With this information, we can now use random forest to make predictions. Therefore, we use the ranger function to derive a random forest model that describes the relationship between the genomic markers and the yield in the training data set. Here too, we have to insert the response in the y argument and the predictors in the x argument. Furthermore, we set the write.forest argument to TRUE and insert the optimal number of trees in the num.trees argument:

> RF_model = ranger(y = Training[,1], x = Training[,-1],
+                   write.forest = TRUE, num.trees = 19000)

And that's it! The object RF_model contains the random forest model that describes the relationship between the genomic markers and the yield. With this model, we can now predict the yield for the 50 plants in the test data set where we have the genomic markers but don't know the yield:

> predictions = predict(RF_model, data=Test)$predictions
> predicted_Test = data.frame(ID = row.names(Test), predicted_yield = predictions)

The data frame predicted_Test now contains the IDs of the wheat plants together with their predicted yield:

> head(predicted_Test)
     ID predicted_yield
 ID_201        593.6063
 ID_202        596.8615
 ID_203        591.3695
 ID_204        589.3909
 ID_205        599.5155
 ID_206        608.1031
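Since the test plants have no recorded yield, we cannot directly check how accurate these predictions are. One way to get a rough idea (a sketch of one possible check, not part of the workflow above) is to hold back a few training plants whose yield is known, predict them, and compare predicted with observed values:

> # Hypothetical validation split (sketch): hold back 20 of the 200 training plants
> Train_sub  = SNPdata[1:180,]
> Validation = SNPdata[181:200,]
> set.seed(123)
> RF_val = ranger(y = Train_sub[,1], x = Train_sub[,-1],
+                 write.forest = TRUE, num.trees = 19000)  # 19,000 trees reused from above; strictly, opt_prediction could be rerun for this smaller split
> val_pred = predict(RF_val, data = Validation[,-1])$predictions
> cor(Validation[,1], val_pred)  # correlation between observed and predicted yield as a simple accuracy measure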
Variable Selection with Random Forests
A different approach to analysing such a data set would be to find out which variables are most important for predicting the response. In this case, the question is which genomic markers are most important for predicting the yield. This, too, can be done with random forests! For such a task, we don't need a training and a test data set. We can simply use the entire data set SNPdata and see which of the variables are the most important ones. But before we do that, we should again determine the optimal number of trees using the optRF package. Since we are interested in calculating the variable importance, we use the function opt_importance:

> set.seed(123)
> optRF_result = opt_importance(y=SNPdata[,1],
+                               X=SNPdata[,-1])
Recommended number of trees: 40000

One can see that the optimal number of trees is now higher than it was for predictions. This is actually often the case. With this number of trees, we can now use the ranger function to calculate the importance of the variables. Therefore, we use the ranger function as before, but we change the number of trees in the num.trees argument to 40,000 and set the importance argument to "permutation" (other options are "impurity" and "impurity_corrected"):

> set.seed(123)
> RF_model = ranger(y=SNPdata[,1], x=SNPdata[,-1],
+                   write.forest = TRUE, num.trees = 40000,
+                   importance="permutation")
> D_VI = data.frame(variable = names(SNPdata)[-1],
+                   importance = RF_model$variable.importance)
> D_VI = D_VI[order(D_VI$importance, decreasing=TRUE),]

The data frame D_VI now contains all the variables, that is, all the genomic markers, together with their importance. We have also ordered this data frame so that the most important markers are at the top and the least important markers are at the bottom, which means we can look at the most important variables using the head function:

> head(D_VI)
 variable importance
 SNP_0020   45.75302
 SNP_0004   38.65594
 SNP_0019   36.81254
 SNP_0050   34.56292
 SNP_0033   30.47347
 SNP_0043   28.54312

And that's it! We have used random forest to make predictions and to estimate the most important variables in a data set. Furthermore, we have optimised random forest using the optRF package!
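If you want to take these results further, one possible follow-up (a sketch, not part of the original analysis) is to extract the most important markers from D_VI and keep only those columns of SNPdata for subsequent analyses:

> # Sketch: take the 20 markers with the highest permutation importance
> top_markers = as.character(head(D_VI$variable, 20))
> # Keep only the yield and these markers (entries of D_VI$variable match the column names of SNPdata)
> SNPdata_top = SNPdata[, c("Yield", top_markers)]
> dim(SNPdata_top)  # should report 250 rows and 21 columns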
Why Do We Need Optimisation?
Now that we've seen how easy it is to use random forest and how quickly it can be optimised, it's time to take a closer look at what's happening behind the scenes. Specifically, we'll explore how random forest works and why the results might change from one run to another.
To do this, we'll use random forest to calculate the importance of each genomic marker, but instead of optimising the number of trees beforehand, we'll stick with the default settings in the ranger function. By default, ranger uses 500 decision trees. Let's try it out:

> set.seed(123)
> RF_model = ranger(y=SNPdata[,1], x=SNPdata[,-1],
+                   write.forest = TRUE, importance="permutation")
> D_VI = data.frame(variable = names(SNPdata)[-1],
+                   importance = RF_model$variable.importance)
> D_VI = D_VI[order(D_VI$importance, decreasing=TRUE),]
> head(D_VI)
 variable importance
 SNP_0020   80.22909
 SNP_0019   60.37387
 SNP_0043   50.52367
 SNP_0005   43.47999
 SNP_0034   38.52494
 SNP_0015   34.88654

As expected, everything runs smoothly — and quickly! In fact, this run was significantly faster than when we previously used 40,000 trees. But what happens if we run the exact same code again, this time with a different seed?

> set.seed(321)
> RF_model2 = ranger(y=SNPdata[,1], x=SNPdata[,-1],
+                    write.forest = TRUE, importance="permutation")
> D_VI2 = data.frame(variable = names(SNPdata)[-1],
+                    importance = RF_model2$variable.importance)
> D_VI2 = D_VI2[order(D_VI2$importance, decreasing=TRUE),]
> head(D_VI2)
 variable importance
 SNP_0050   60.64051
 SNP_0043   58.59175
 SNP_0033   52.15701
 SNP_0020   51.10561
 SNP_0015   34.86162
 SNP_0019   34.21317

Once again, everything appears to work fine, but take a closer look at the results. In the first run, SNP_0020 had the highest importance score at 80.23, but in the second run, SNP_0050 takes the top spot and SNP_0020 drops to fourth place with a much lower importance score of 51.11. That's a significant shift! So what changed?
The answer lies in something called non-determinism. Random forest, as the name suggests, involves a lot of randomness: it randomly selects data samples and subsets of variables at various points during training. This randomness helps prevent overfitting, but it also means that results can vary slightly each time you run the algorithm — even with the exact same data set. That's where the set.seed() function comes in. It acts like a bookmark in a shuffled deck of cards. By setting the same seed, you ensure that the random choices made by the algorithm follow the same sequence every time you run the code. But when you change the seed, you're effectively changing the random path the algorithm follows. That's why, in our example, the most important genomic markers came out differently in each run. This behavior — where the same process can yield different results due to internal randomness — is a classic example of non-determinism in machine learning.
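To put a number on this disagreement, the two importance tables can be compared directly, for example with a rank correlation. The lines below are a sketch (not part of the original post) that merges D_VI and D_VI2 and computes the Spearman correlation between the two runs; a value well below 1 confirms that the 500-tree rankings are unstable:

> # Sketch: compare the two 500-tree importance rankings from the different seeds
> comparison = merge(D_VI, D_VI2, by = "variable",
+                    suffixes = c("_seed123", "_seed321"))
> # Spearman rank correlation; 1 would mean the two runs rank the markers identically
> cor(comparison$importance_seed123, comparison$importance_seed321,
+     method = "spearman")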
Taming the Randomness in Random Forests
As we just saw, random forest models can produce slightly different results every time you run them, even on the same data, due to the algorithm's built-in randomness. So, how can we reduce this randomness and make our results more stable?
One of the simplest and most effective ways is to increase the number of trees. Each tree in a random forest is trained on a random subset of the data and variables, so the more trees we add, the better the model can "average out" the noise caused by individual trees. Think of it like asking 10 people for their opinion versus asking 1,000 — you're more likely to get a reliable answer from the larger group. With more trees, the model's predictions and variable importance rankings tend to become more stable and reproducible, even without setting a specific seed. In other words, adding more trees helps to tame the randomness.
However, there's a catch. More trees also mean more computation time. Training a random forest with 500 trees might take a few seconds, but training one with 40,000 trees could take several minutes or more, depending on the size of your data set and your computer's performance. Crucially, the relationship between stability and computation time is non-linear. While going from 500 to 1,000 trees can significantly improve stability, going from 5,000 to 10,000 trees might only provide a tiny improvement in stability while doubling the computation time. At some point, you hit a plateau where adding more trees gives diminishing returns — you pay more in computation time but gain very little in stability.
That's why it's essential to find the right balance: enough trees to ensure stable results, but not so many that your analysis becomes unnecessarily slow. And this is exactly what the optRF package does: it analyses the relationship between stability and the number of trees in random forest and uses this relationship to determine the optimal number of trees, which leads to stable results and beyond which adding more trees would only unnecessarily increase the computation time.
Above, we have already used the opt_importance function and saved the results as optRF_result. This object contains information about the optimal number of trees, but it also contains information about the relationship between stability and the number of trees. Using the plot_stability function, we can visualise this relationship. To do so, we insert the name of the optRF object, the measure we are interested in (here, "importance"), the interval we want to visualise on the x axis, and whether the recommended number of trees should be added:

> plot_stability(optRF_result, measure="importance",
+                from=0, to=50000, add_recommendation=FALSE)

[Figure: output of the plot_stability function, visualising the stability of random forest depending on the number of decision trees]

This plot clearly shows the non-linear relationship between stability and the number of trees. With 500 trees, random forest only reaches a stability of around 0.2, which explains why the results changed drastically when random forest was repeated with a different seed. With the recommended 40,000 trees, however, the stability is near 1 (which indicates perfect stability). Adding more than 40,000 trees would push the stability even closer to 1, but the gain would be very small while the computation time would keep increasing. That is why 40,000 trees is the optimal number of trees for this data set.

The Takeaway: Optimise Random Forest to Get the Most Out of It
Random forest is a powerful ally for anyone working with data — whether you're a researcher, analyst, student, or data scientist. It's easy to use, remarkably flexible, and highly effective across a wide range of applications. But like any tool, using it well means understanding what's happening under the hood. In this post, we've uncovered one of its hidden quirks: the randomness that makes it strong can also make it unstable if not carefully managed. Fortunately, with the optRF package, we can strike the right balance between stability and performance, ensuring we get reliable results without wasting computational resources. Whether you're working in genomics, medicine, economics, agriculture, or any other data-rich field, mastering this balance will help you make smarter, more confident decisions based on your data.
The post How to Set the Number of Trees in Random Forest appeared first on Towards Data Science.