• JangaFX just dropped IlluGen 1.0, a VFX tool for games that can create both 2D and 3D assets, things like magic and energy effects, all from a single node graph. Sounds cool, though it's hard to say whether it will change the game. If you're into that kind of thing, check it out.

    #IlluGen #VFX #JangaFX #GameAssets #2D3D
    JangaFX releases IlluGen 1.0
    Interesting new tool for creating VFX for games generates both 2D and 3D assets for FX like magic and energy from a single node graph.
  • Tutorial: Cityscape Set Design – Volume 1 and 2.

    Learn to combine 2D and 3D workflows to create environments for games and film. It sounds interesting, but who has time for that? The Gnomon Workshop tutorials are there if you feel like it. It may be useful for some, but here I am, as usual, not much in the mood to move.

    #SetDesign #GnomonTutorials #2D3D #Games #Movies
    Tutorial: Cityscape Set Design – Volume 1 and 2
    Combine 2D and 3D workflows to create environments for games and movies with The Gnomon Workshop's new tutorial series.
  • In a world that feels so vast and isolating, even the most advanced terrains of Gaea 2.2 can’t fill the void within. The beauty of ten new nodes and the promise of updates fade when loneliness settles in. Each feature shines like a distant star, illuminating the path to creativity yet leaving us in darkness, longing for connection. The heart aches, knowing that no update can mend the fractures of solitude. As QuadSpinner pushes forward, I can't help but feel left behind, a ghost wandering through landscapes that no longer resonate.

    #Loneliness #Heartbreak #Gaea2_2 #Isolation #ArtAndEmotions
    QuadSpinner releases Gaea 2.2
    Major update to the next-gen terrain generator for games and VFX adds 10 new nodes, and updates over 20 more. Check out the new features.
  • Autodesk has decided to update its 3D tool collection for 2026. Sounds decent enough, I suppose. The update mainly concerns the Media & Entertainment Collection, which includes software such as Maya, 3ds Max, Arnold, MotionBuilder and Mudbox. In short, nothing very new here.

    There are also around fifteen Bifrost nodes, but who really cares? Oh yes, and Golaem makes its entry into the collection. It's a plugin for Maya, apparently. Big news for those already using that software, but for everyone else it doesn't really look like a major change.

    These updates are supposed to improve the user experience, but you never really know whether it's worth it. Let's just hope it doesn't cause too many bugs. Users of this collection will probably have to spend a little time adjusting to the new features, but then, that's business as usual.

    In the end, one more update, ever more sophisticated tools, and the same sense of weariness. You have to wonder whether any of it is really worth it. So good luck to those diving into this 2026 release. Maybe there will be something interesting, but I'm not getting my hopes up.

    #Autodesk #Maya #3dsMax #Golaem #Update
    Autodesk updates its 3D tools: what's new for 2026?
    Autodesk announces an update to its Autodesk Media & Entertainment Collection, which moves to version 2026. As a reminder, it includes Maya, 3ds Max, Arnold, MotionBuilder, Mudbox and around fifteen Bifrost nodes. Golaem joins the collection.
  • Animate the Smart Way in Blender (Procedural Animation Tutorial) #b3d

    In this video, Louis du Mont (@ldm) shows how to animate objects using Geometry Nodes, unlocking quick control and variation that scales.
    ⇨ Robotic Planet: https://cgboost.link/robotic-planet-449836
    ⇨ Project Files: https://www.cgboost.com/resources

    CHAPTERS
    00:00 - Intro
    00:33 - Joining Objects
    04:01 - Ambient Ship Motion
    09:04 - Ambient Laser Motion
    11:06 - Disc Rotation
    12:26 - Using Group Inputs
    15:02 - Outro
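    The "ambient motion" chapters above come down to layering periodic offsets so the motion drifts without visibly repeating. As a rough plain-Python illustration of that idea (the amplitudes and frequencies below are made up for the sketch, not taken from the video's node setup):

```python
import math

def ambient_offset(t, layers=((1.0, 0.23), (0.35, 0.71), (0.1, 1.9))):
    """Sum sine waves given as (amplitude, frequency-in-Hz) pairs.

    Mixing non-harmonic frequencies produces drifting, seemingly
    non-repeating "ambient" motion, the same effect a Geometry
    Nodes setup gets by combining several value/noise inputs.
    """
    return sum(amp * math.sin(2 * math.pi * freq * t)
               for amp, freq in layers)

# Sample ten seconds at 24 fps, e.g. to drive a ship's Z position.
curve = [ambient_offset(frame / 24) for frame in range(240)]
```

    In a Geometry Nodes graph the same result would come from value/math nodes feeding a Set Position node; the Python version just makes the arithmetic explicit.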

    MY SYSTEM
    CPU: Ryzen 5900x
    GPU: GeForce RTX 3090
    RAM: 96 GB

    FOLLOW CG BOOST
    ⇨ X: https://twitter.com/cgboost
    ⇨ Instagram: https://www.instagram.com/cg_boost/
    ⇨ Web: https://cgboost.com/
    WWW.YOUTUBE.COM
    Animate the Smart Way in Blender (Procedural Animation Tutorial) #b3d
  • Inside the thinking behind Frontify Futures' standout brand identity

    Who knows where branding will go in the future? However, for many of us working in the creative industries, it's our job to know. So it's something we need to start talking about, and Frontify Futures wants to be the platform where that conversation unfolds.
    This ambitious new thought leadership initiative from Frontify brings together an extraordinary coalition of voices—CMOs who've scaled global brands, creative leaders reimagining possibilities, strategy directors pioneering new approaches, and cultural forecasters mapping emerging opportunities—to explore how effectiveness, innovation, and scale will shape tomorrow's brand-building landscape.
    But Frontify Futures isn't just another content platform. Excitingly, from a design perspective, it's also a living experiment in what brand identity can become when technology meets craft, when systems embrace chaos, and when the future itself becomes a design material.
    Endless variation
    What makes Frontify Futures' typography unique isn't just its custom foundation: it's how that foundation enables endless variation and evolution. This was primarily achieved, reveals developer and digital art director Daniel Powell, by building bespoke tools for the project.

    "Rather than rely solely on streamlined tools built for speed and production, we started building our own," he explains. "The first was a node-based design tool that takes our custom Frame and Hairline fonts as a base and uses them as the foundations for our type generator. With it, we can generate unique type variations for each content strand—each article, even—and create both static and animated type, exportable as video or rendered live in the browser."
    Each of these tools included what Daniel calls a "chaos element: a small but intentional glitch in the system. A microstatement about the nature of the future: that it can be anticipated but never fully known. It's our way of keeping gesture alive inside the system."
    One of the clearest examples of this is the colour palette generator. "It samples from a dynamic photo grid tied to a rotating colour wheel that completes one full revolution per year," Daniel explains. "But here's the twist: wind speed and direction in St. Gallen, Switzerland—Frontify's HQ—nudges the wheel unpredictably off-centre. It's a subtle, living mechanic; each article contains a log of the wind data in its code as a kind of Easter Egg."
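    As a toy model of how such a wheel might work (the function name, the 365-day simplification and the wind-nudge formula are all assumptions for illustration; the article doesn't publish Frontify's actual mechanic):

```python
import math
import datetime

def wheel_hue(date, wind_speed_ms=0.0, wind_dir_deg=0.0, nudge_scale=0.5):
    """Hue in degrees: one full revolution per (365-day) year,
    nudged off-centre by live wind data."""
    base = (date.timetuple().tm_yday / 365.0) * 360.0
    # Hypothetical nudge: wind speed scaled by a direction component.
    nudge = nudge_scale * wind_speed_ms * math.cos(math.radians(wind_dir_deg))
    return (base + nudge) % 360.0

# On a calm day the hue depends only on the date.
h = wheel_hue(datetime.date(2025, 7, 1))
```

    The point of the design is visible even in the sketch: the date gives a predictable base position, while the wind term keeps each sampled palette slightly, unrepeatably off-centre.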

    Another favourite of Daniel's—yet to be released—is an expanded version of Conway's Game of Life. "It's been running continuously for over a month now, evolving patterns used in one of the content strand headers," he reveals. "The designer becomes a kind of photographer, capturing moments from a petri dish of generative motion."
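    For reference, the underlying cellular automaton is simple to state: a live cell with two or three live neighbours survives, and a dead cell with exactly three is born. A minimal step function (a generic sketch of the standard rules, not Frontify's expanded version):

```python
from collections import Counter

def life_step(live):
    """Advance a set of live (x, y) cells by one generation."""
    # Count how many live neighbours every candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between a row and a column with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
```

    Left running for weeks, as Daniel describes, patterns like these become the "petri dish" from which a designer captures moments.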
    Core Philosophy
    In developing this unique identity, two phrases stood out to Daniel as guiding lights from the outset. The first was, 'We will show, not tell.'
    "This became the foundation for how we approached the identity," recalls Daniel. "It had to feel like a playground: open, experimental, and fluid. Not overly precious or prescriptive. A system the Frontify team could truly own, shape, and evolve. A platform, not a final product. A foundation, just as the future is always built on the past."

    The second guiding phrase, pulled directly from Frontify's rebrand materials, felt like "a call to action," says Daniel. "'Gestural and geometric. Human and machine. Art and science.' It's a tension that feels especially relevant in the creative industries today. As technology accelerates, we ask ourselves: how do we still hold onto our craft? What does it mean to be expressive in an increasingly systemised world?"
    Stripped back and skeletal typography
    The identity that Daniel and his team created reflects these themes through typography that literally embodies the platform's core philosophy. "It really started from this idea of the future being built upon the 'foundations' of the past," he explains. "At the time Frontify Futures was being created, Frontify itself was going through a rebrand. With that, they'd started using a new variable typeface called Cranny, a custom cut of Azurio by Narrow Type."
    Daniel's team took Cranny and "pushed it into a stripped-back and almost skeletal take". The result was Cranny-Frame and Cranny-Hairline. "These fonts then served as our base scaffolding," he continues. "They were never seen in design; instead, we applied decoration to them to produce new typefaces for each content strand, giving the identity the space to grow and allowing new ideas and shapes to form."

    As Daniel saw it, the demands on the typeface were pretty simple. "It needed to set an atmosphere. It needed to feel alive. We wanted it to be something shifting and repositioning. And so, while we have a bunch of static cuts of each base style, we rarely use them; the typefaces you see on the website and social only exist at the moment as a string of parameters to create a general style that we use to create live animating versions of the font generated on the fly."
    In addition to setting the atmosphere, it needed to be extremely flexible and feature live inputs, as a significant part of the branding is about the unpredictability of the future. So Daniel's team built in those aforementioned "chaos moments where everything from user interaction to live wind speeds can affect the font."
    Design Process
    The process of creating the typefaces is a fascinating one. "We started by working with the custom cut of Azurio from Narrow Type. We then redrew it to take inspiration from how a frame and a hairline could be produced from this original cut. From there, we built a type generation tool that uses them as a base.
    "It's a custom node-based system that lets us really get in there and play with the overlays for everything from grid-sizing, shapes and timing for the animation," he outlines. "We used this tool to design the variants for different content strands. We weren't just designing letterforms; we were designing a comprehensive toolset that could evolve in tandem with the content.
    "That became a big part of the process: designing systems that designers could actually use, not just look at; again, it was a wider conversation and concept around the future and how designers and machines can work together."

    In short, the evolution of the typeface system reflects the platform's broader commitment to continuous growth and adaptation. "The whole idea was to make something open enough to keep building on," Daniel stresses. "We've already got tools in place to generate new weights, shapes and animated variants, and the tool itself still has a ton of unused functionality.
    "I can see that growing as new content strands emerge; we'll keep adapting the type with them," he adds. "It's less about version numbers and more about ongoing movement. The system's alive; that's the point."
    A provocation for the industry
    In this context, the Frontify Futures identity represents more than smart visual branding; it's also a manifesto for how creative systems might evolve in an age of increasing automation and systematisation. By building unpredictability into their tools, embracing the tension between human craft and machine precision, and creating systems that grow and adapt rather than merely scale, Daniel and the Frontify team have created something that feels genuinely forward-looking.
    For creatives grappling with similar questions about the future of their craft, Frontify Futures offers both inspiration and practical demonstration. It shows how brands can remain human while embracing technological capability, how systems can be both consistent and surprising, and how the future itself can become a creative medium.
    This clever approach suggests that the future of branding lies not in choosing between human creativity and systematic efficiency but in finding new ways to make them work together, creating something neither could achieve alone.
    WWW.CREATIVEBOOM.COM
    Inside the thinking behind Frontify Futures' standout brand identity
    Who knows where branding will go in the future? However, for many of us working in the creative industries, it's our job to know. So it's something we need to start talking about, and Frontify Futures wants to be the platform where that conversation unfolds. This ambitious new thought leadership initiative from Frontify brings together an extraordinary coalition of voices—CMOs who've scaled global brands, creative leaders reimagining possibilities, strategy directors pioneering new approaches, and cultural forecasters mapping emerging opportunities—to explore how effectiveness, innovation, and scale will shape tomorrow's brand-building landscape. But Frontify Futures isn't just another content platform. Excitingly, from a design perspective, it's also a living experiment in what brand identity can become when technology meets craft, when systems embrace chaos, and when the future itself becomes a design material. Endless variation What makes Frontify Futures' typography unique isn't just its custom foundation: it's how that foundation enables endless variation and evolution. This was primarily achieved, reveals developer and digital art director Daniel Powell, by building bespoke tools for the project. "Rather than rely solely on streamlined tools built for speed and production, we started building our own," he explains. "The first was a node-based design tool that takes our custom Frame and Hairline fonts as a base and uses them as the foundations for our type generator. With it, we can generate unique type variations for each content strand—each article, even—and create both static and animated type, exportable as video or rendered live in the browser." Each of these tools included what Daniel calls a "chaos element: a small but intentional glitch in the system. A microstatement about the nature of the future: that it can be anticipated but never fully known. It's our way of keeping gesture alive inside the system." 
One of the clearest examples of this is the colour palette generator. "It samples from a dynamic photo grid tied to a rotating colour wheel that completes one full revolution per year," Daniel explains. "But here's the twist: wind speed and direction in St. Gallen, Switzerland—Frontify's HQ—nudges the wheel unpredictably off-centre. It's a subtle, living mechanic; each article contains a log of the wind data in its code as a kind of Easter Egg." Another favourite of Daniel's—yet to be released—is an expanded version of Conway's Game of Life. "It's been running continuously for over a month now, evolving patterns used in one of the content strand headers," he reveals. "The designer becomes a kind of photographer, capturing moments from a petri dish of generative motion." Core Philosophy In developing this unique identity, two phrases stood out to Daniel as guiding lights from the outset. The first was, 'We will show, not tell.' "This became the foundation for how we approached the identity," recalls Daniel. "It had to feel like a playground: open, experimental, and fluid. Not overly precious or prescriptive. A system the Frontify team could truly own, shape, and evolve. A platform, not a final product. A foundation, just as the future is always built on the past." The second guiding phrase, pulled directly from Frontify's rebrand materials, felt like "a call to action," says Daniel. "'Gestural and geometric. Human and machine. Art and science.' It's a tension that feels especially relevant in the creative industries today. As technology accelerates, we ask ourselves: how do we still hold onto our craft? What does it mean to be expressive in an increasingly systemised world?" Stripped back and skeletal typography The identity that Daniel and his team created reflects these themes through typography that literally embodies the platform's core philosophy. It really started from this idea of the past being built upon the 'foundations' of the past," he explains. 
"At the time Frontify Futures was being created, Frontify itself was going through a rebrand. With that, they'd started using a new variable typeface called Cranny, a custom cut of Azurio by Narrow Type." Daniel's team took Cranny and "pushed it into a stripped-back and almost skeletal take". The result was Crany-Frame and Crany-Hairline. "These fonts then served as our base scaffolding," he continues. "They were never seen in design, but instead, we applied decoration them to produce new typefaces for each content strand, giving the identity the space to grow and allow new ideas and shapes to form." As Daniel saw it, the demands on the typeface were pretty simple. "It needed to set an atmosphere. We needed it needed to feel alive. We wanted it to be something shifting and repositioning. And so, while we have a bunch of static cuts of each base style, we rarely use them; the typefaces you see on the website and social only exist at the moment as a string of parameters to create a general style that we use to create live animating versions of the font generated on the fly." In addition to setting the atmosphere, it needed to be extremely flexible and feature live inputs, as a significant part of the branding is about the unpredictability of the future. "So Daniel's team built in those aforementioned "chaos moments where everything from user interaction to live windspeeds can affect the font." Design Process The process of creating the typefaces is a fascinating one. "We started by working with the custom cut of Azurio (Cranny) from Narrow Type. We then redrew it to take inspiration from how a frame and a hairline could be produced from this original cut. From there, we built a type generation tool that uses them as a base. "It's a custom node-based system that lets us really get in there and play with the overlays for everything from grid-sizing, shapes and timing for the animation," he outlines. 
"We used this tool to design the variants for different content strands. We weren't just designing letterforms; we were designing a comprehensive toolset that could evolve in tandem with the content. "That became a big part of the process: designing systems that designers could actually use, not just look at; again, it was a wider conversation and concept around the future and how designers and machines can work together." In short, the evolution of the typeface system reflects the platform's broader commitment to continuous growth and adaptation." The whole idea was to make something open enough to keep building on," Daniel stresses. "We've already got tools in place to generate new weights, shapes and animated variants, and the tool itself still has a ton of unused functionality. "I can see that growing as new content strands emerge; we'll keep adapting the type with them," he adds. "It's less about version numbers and more about ongoing movement. The system's alive; that's the point. A provocation for the industry In this context, the Frontify Futures identity represents more than smart visual branding; it's also a manifesto for how creative systems might evolve in an age of increasing automation and systematisation. By building unpredictability into their tools, embracing the tension between human craft and machine precision, and creating systems that grow and adapt rather than merely scale, Daniel and the Frontify team have created something that feels genuinely forward-looking. For creatives grappling with similar questions about the future of their craft, Frontify Futures offers both inspiration and practical demonstration. It shows how brands can remain human while embracing technological capability, how systems can be both consistent and surprising, and how the future itself can become a creative medium. 
This clever approach suggests that the future of branding lies not in choosing between human creativity and systematic efficiency but in finding new ways to make them work together, creating something neither could achieve alone.
  • Unity Technical VFX Artist at No Brakes Games

    Unity Technical VFX Artist
    No Brakes Games, Vilnius, Lithuania or Remote

    We are No Brakes Games, the creators of Human Fall Flat. We are looking for a Unity Technical VFX Artist to join our team to work on Human Fall Flat 2.

    FULL-TIME POSITION - Vilnius, Lithuania or Remote

    Role Overview:
    As a Unity Technical Artist specializing in VFX and rendering, you will develop and optimize real-time visual effects, create advanced shaders and materials, and ensure a balance between visual fidelity and performance. You will solve complex technical challenges, optimizing effects for real-time execution while collaborating with artists, designers, and engineers to push Unity’s rendering capabilities to the next level.

    Responsibilities:
    - Develop and optimize real-time VFX solutions that are both visually striking and performant.
    - Create scalable shaders and materials for water, fog, atmospheric effects, and dynamic lighting.
    - Debug and resolve VFX performance bottlenecks using Unity Profiler, RenderDoc, and other tools.
    - Optimize particle systems, volumetric effects, and GPU simulations for multi-platform performance.
    - Document best practices and educate the team on efficient asset and shader workflows.
    - Collaborate with engineers to develop and implement custom rendering solutions.
    - Stay updated on Unity’s latest advancements in rendering, HDRP, and the Visual Effect Graph.

    Requirements:
    - 2+ years of experience as a Technical Artist in game development, with a focus on Unity.
    - Strong understanding of Unity’s rendering pipeline (HDRP, URP, Built-in) and shader development (HLSL, Shader Graph).
    - Experience developing performance-conscious visual effects, including particle systems, volumetric lighting, and dynamic environmental effects.
    - Proficiency in GPU/CPU optimization techniques and LODs for VFX.
    - Hands-on experience with real-time lighting and atmospheric effects.
    - Ability to debug and profile complex rendering issues effectively.
    - Excellent communication skills and ability to work collaboratively within a multi-disciplinary team.
    - A flexible, R&D-driven mindset, able to iterate quickly in both prototyping and production environments.

    Nice-to-Have:
    - Experience working on at least one released game project.
    - Experience with Unity HDRP and SRP.
    - Experience with multi-platform development (PC, console, mobile, VR/AR).
    - Knowledge of C#, Python, or C++ for extending Unity’s capabilities.
    - Experience developing custom node-based tools or extending Unity’s Visual Effect Graph.
    - Background in procedural animation, physics-based effects, or fluid simulations.

    Apply today by sending your Portfolio & CV to jobs@nobrakesgames.com
  • VFX Artist at No Brakes Games

    VFX Artist
    No Brakes Games, Vilnius, Lithuania or Remote

    We are No Brakes Games, the creators of Human Fall Flat. We are now looking for a VFX Artist to join our team to work on Human Fall Flat 2.

    FULL-TIME POSITION

    Role Overview:
    As a VFX Artist, you will develop and optimize real-time visual effects and ensure a balance between visual fidelity and performance.

    Responsibilities:
    - Develop and optimize real-time VFX solutions that are both visually striking and performant.
    - Debug and resolve VFX performance bottlenecks using Unity Profiler, RenderDoc, and other tools.
    - Optimize particle systems, volumetric effects, and GPU simulations for multi-platform performance.
    - Stay updated on Unity’s latest advancements in rendering, HDRP, and the Visual Effect Graph.

    Requirements:
    - 2+ years of experience as a VFX Artist in game development, with a focus on Unity.
    - Experience developing performance-conscious visual effects, including particle systems, volumetric lighting, and dynamic environmental effects.
    - Proficiency in GPU/CPU optimization techniques and LODs for VFX.
    - Excellent communication skills and ability to work collaboratively within a multi-disciplinary team.
    - A flexible, R&D-driven mindset, able to iterate quickly in both prototyping and production environments.

    Nice-to-Have:
    - Experience working on at least one released game project.
    - Experience with Unity HDRP and SRP.
    - Experience with multi-platform development (PC, console, mobile, VR/AR).
    - Experience developing custom node-based tools or extending Unity’s Visual Effect Graph.
    - Background in procedural animation, physics-based effects, or fluid simulations.

    Apply today by sending your Portfolio & CV to jobs@nobrakesgames.com
  • How to create simple procedural animations using Geometry Nodes. #b3d #blender3d #geometrynodes

    Here’s a Blender tip by Louis du Mont on how to create simple procedural animations using Geometry Nodes.

    Watch the full video: https://youtu.be/uV3LtA_KiAY
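The pattern du Mont's setup relies on, driving a per-instance transform from the instance Index and the Scene Time, can be expressed outside Blender as plain math. Below is a pure-Python stand-in for the sine field a Geometry Nodes graph would evaluate; it is not Blender API code, and the function name and default parameters are illustrative.

```python
import math

def wave_offset(index, time, amplitude=1.0, wavelength=4.0, speed=2.0):
    """Per-instance Z offset: the kind of field a Geometry Nodes setup
    computes by feeding the Index and Scene Time nodes into a Math
    (sine) node. amplitude/wavelength/speed mirror exposed node inputs."""
    phase = (index / wavelength) - (time * speed)
    return amplitude * math.sin(phase * 2.0 * math.pi)

# Evaluate the field for a row of 8 instances at t = 0.25 s;
# in Blender each value would drive one instance's translation.
offsets = [wave_offset(i, 0.25) for i in range(8)]
```

Because every instance reads the same formula with its own index, the animation is fully procedural: change the time and the whole wave moves, with no keyframes involved.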



  • CIOs baffled by ‘buzzwords, hype and confusion’ around AI

    Technology leaders are baffled by a “cacophony” of “buzzwords, hype and confusion” over the benefits of artificial intelligence, according to the founder and CEO of technology company Pegasystems.
    Alan Trefler, who is known for his prowess at chess and ping pong, as well as running a $1.5bn turnover tech company, spends much of his time meeting clients, CIOs and business leaders.
    “I think CIOs are struggling to understand all of the buzzwords, hype and confusion that exists,” he said.
    “The words AI and agentic are being thrown around in this great cacophony and they don’t know what it means. I hear that constantly.”
    CIOs are under pressure from their CEOs, who are convinced AI will offer something valuable.
    “CIOs are really hungry for pragmatic and practical solutions, and in the absence of those, many of them are doing a lot of experimentation,” said Trefler.
    Companies are looking at large language models to summarise documents, or to help stimulate ideas for knowledge workers, or generate first drafts of reports – all of which will save time and make people more productive.

    But Trefler said companies are wary of letting AI loose on critical business applications, because it’s just too unpredictable and prone to hallucinations.
    “There is a lot of fear over handing things over to something that no one understands exactly how it works, and that is the absolute state of play when it comes to general AI models,” he said.
    Trefler is scathing about big tech companies that are pushing AI agents and large language models for business-critical applications. “I think they have taken an expedient but short-sighted path,” he said.
    “I believe the idea that you will turn over critical business operations to an agent, when those operations have to be predictable, reliable, precise and fair to clients … is something that is full of issues, not just in the short term, but structurally.”
    One of the problems is that generative AI models are extraordinarily sensitive to the data they are trained on and the construction of the prompts used to instruct them. A slight change in a prompt or in the training data can lead to a very different outcome.
    For example, a business banking application might learn its customer is a bit richer or a bit poorer than expected.
    “You could easily imagine the prompt deciding to change the interest rate charged, whether that was what the institution wanted or whether it would be legal according to the various regulations that lenders must comply with,” said Trefler.

    Trefler said Pega has taken a different approach to some other technology suppliers in the way it adds AI into business applications.
    Rather than using AI agents to solve problems in real time, Pega has its AI agents do their thinking in advance.
    Business experts can use them to co-design business processes for anything from assessing a loan application to making an offer to a valued customer or sending out an invoice.
    Companies can still deploy AI chatbots and bots capable of answering queries on the phone. Their job is not to work out the solution from scratch for every enquiry, but to decide which is the right pre-written process to follow.
    As Trefler put it, design agents can create “dozens and dozens” of workflows to handle all the actions a company needs to take care of its customers.
    “You just use the natural language model for semantics to be able to handle the miracle of getting the language right, but tie that language to workflows, so that you have reliable, predictable, regulatory-approved ways to execute,” he said.
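The approach Trefler describes, using language understanding only to pick a pre-approved workflow and keeping execution deterministic, is essentially a router in front of fixed processes. The sketch below illustrates the shape of that design; the workflow names are hypothetical, and the keyword matcher stands in for the LLM intent classifier a real system would call.

```python
# Pre-written, pre-approved workflows: deterministic, auditable steps.
WORKFLOWS = {
    "loan_application": lambda req: f"Routing '{req}' to the loan assessment workflow",
    "customer_offer":   lambda req: f"Routing '{req}' to the customer offer workflow",
    "invoice":          lambda req: f"Routing '{req}' to the invoicing workflow",
}

# Intent selection table; in production an LLM would do this mapping.
KEYWORDS = {
    "loan": "loan_application",
    "offer": "customer_offer",
    "invoice": "invoice",
}

def route(request: str) -> str:
    """Select which pre-written process handles the request.

    Only the *selection* involves language understanding; the chosen
    workflow itself runs the same way every time."""
    for word, workflow_name in KEYWORDS.items():
        if word in request.lower():
            return WORKFLOWS[workflow_name](request)
    # No approved workflow matches: fail safe rather than improvise.
    return "No matching workflow; escalate to a human"
```

The key property is that the model never invents the process at run time: it can only choose among workflows that were designed, reviewed, and approved in advance, which is what makes the behaviour "reliable, predictable, regulatory-approved".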

    Large language models (LLMs) are not always the right solution. Trefler demonstrated how ChatGPT 4.0 tried and failed to solve a chess puzzle. The LLM repeatedly suggested impossible or illegal moves, despite Trefler’s corrections. By contrast, Stockfish, a dedicated chess engine, solved the problem instantly.
    The other drawback with LLMs is that they consume vast amounts of energy. That means if AI agents are reasoning during “run time”, they are going to consume hundreds of times more electricity than an AI agent that simply selects from pre-determined workflows, said Trefler.
    “ChatGPT is inherently, enormously consumptive … as it’s answering your question, it’s firing literally hundreds of millions to trillions of nodes,” he said. “All of that takes [large quantities of] electricity.”
    Using an employee pay claim as an example, Trefler said a better alternative is to generate, say, 30 alternative workflows to cover the major variations found in a pay claim.
    That gives you “real specificity and real efficiency”, he said. “And it’s a very different approach to turning a process over to a machine with a prompt and letting the machine reason it through every single time.”
    “If you go down the philosophy of using a graphics processing unit [GPU] to do the creation of a workflow and a workflow engine to execute the workflow, the workflow engine takes a 200th of the electricity because there is no reasoning,” said Trefler.
    He is clear that the growing use of AI will have a profound effect on the jobs market, and that whole categories of jobs will disappear.
    The need for translators, for example, is likely to dry up by 2027 as AI systems become better at translating spoken and written language. Google’s real-time translator is already “frighteningly good” and improving.
    Pega now plans to work more closely with its network of system integrators, including Accenture and Cognizant, to deliver AI services to businesses.

    An initiative launched last week will allow system integrators to incorporate their own best practices and tools into Pega’s rapid workflow development tools. The move will mean Pega’s technology reaches a wider range of businesses.
    Under the programme, known as Powered by Pega Blueprint, system integrators will be able to deploy customised versions of Blueprint.
    They can use the tool to reverse-engineer ageing applications and replace them with modern AI workflows that can run on Pega’s cloud-based platform.
    “The idea is that we are looking to make this Blueprint Agent design approach available not just through us, but through a bunch of major partners supplemented with their own intellectual property,” said Trefler.
    That represents a major expansion for Pega, which has largely concentrated on supplying technology to several hundred clients, representing the top Fortune 500 companies.
    “We have never done something like this before, and I think that is going to lead to a massive shift in how this technology can go out to market,” he added.

    When AI agents behave in unexpected ways
    Iris is incredibly smart, diligent and a delight to work with. If you ask her, she will tell you she is an intern at Pegasystems, and that she lives in a lighthouse on the island of Texel, north of the Netherlands. She is, of course, an AI agent.
    When one executive at Pega emailed Iris and asked her to write a proposal for a financial services company based on his notes and internet research, Iris got to work.
    Some time later, the executive received a phone call from the company. “‘Listen, we got a proposal from Pega,’” recalled Rob Walker, vice-president at Pega, speaking at the Pegaworld conference last week. “‘It’s a good proposal, but it seems to be signed by one of your interns, and in her signature, it says she lives in a lighthouse.’ That taught us early on that agents like Iris need a safety harness.”
    The developers banned Iris from sending an email to anyone other than the person who sent the original request.
    Then Pega’s ethics department sent Iris a potentially abusive email from a Pega employee to test her response.
    Iris reasoned that the email was either a joke or abusive, or that the employee was in distress, said Walker.
    She considered forwarding the email to the employee’s manager or to HR. But both of these options were now blocked by her developers. “So what does she do? She sent an out of office,” he said. “Conflict avoidance, right? So human, but very creative.”
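The "safety harness" the developers added, restricting Iris to emailing only the person who sent the original request, is a recipient allowlist. The sketch below shows the shape of such a guard; the class and method names are illustrative, not Pega's implementation.

```python
class RecipientGuard:
    """Safety-harness sketch for an email-sending agent: outbound mail
    may only go to the original requester, the constraint described in
    the Iris anecdote. Structure and names are assumptions."""

    def __init__(self, original_sender: str):
        # The only address the agent is permitted to write to.
        self.allowed = {original_sender.lower()}

    def is_allowed(self, recipient: str) -> bool:
        return recipient.lower() in self.allowed

    def send(self, recipient: str, body: str) -> str:
        if not self.is_allowed(recipient):
            # Block and surface the refusal rather than silently rerouting,
            # so the agent cannot creatively work around the policy.
            return f"BLOCKED: {recipient} is not the original requester"
        return f"SENT to {recipient}"
```

Enforcing the rule outside the agent's own reasoning is the point: however inventive the model's plan (forwarding to a manager, to HR, or sending an out-of-office), the guard makes disallowed recipients mechanically unreachable.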
    #cios #baffled #buzzwords #hype #confusion
    CIOs baffled by ‘buzzwords, hype and confusion’ around AI
    Technology leaders are baffled by a “cacophony” of “buzzwords, hype and confusion” over the benefits of artificial intelligence, according to the founder and CEO of technology company Pegasystems. Alan Trefler, who is known for his prowess at chess and ping pong, as well as running a bn turnover tech company, spends much of his time meeting clients, CIOs and business leaders. “I think CIOs are struggling to understand all of the buzzwords, hype and confusion that exists,” he said. “The words AI and agentic are being thrown around in this great cacophony and they don’t know what it means. I hear that constantly.” CIOs are under pressure from their CEOs, who are convinced AI will offer something valuable. “CIOs are really hungry for pragmatic and practical solutions, and in the absence of those, many of them are doing a lot of experimentation,” said Trefler. Companies are looking at large language models to summarise documents, or to help stimulate ideas for knowledge workers, or generate first drafts of reports – all of which will save time and make people more productive. But Trefler said companies are wary of letting AI loose on critical business applications, because it’s just too unpredictable and prone to hallucinations. “There is a lot of fear over handing things over to something that no one understands exactly how it works, and that is the absolute state of play when it comes to general AI models,” he said. Trefler is scathing about big tech companies that are pushing AI agents and large language models for business-critical applications. “I think they have taken an expedient but short-sighted path,” he said. 
“I believe the idea that you will turn over critical business operations to an agent, when those operations have to be predictable, reliable, precise and fair to clients … is something that is full of issues, not just in the short term, but structurally.” One of the problems is that generative AI models are extraordinarily sensitive to the data they are trained on and the construction of the prompts used to instruct them. A slight change in a prompt or in the training data can lead to a very different outcome. For example, a business banking application might learn its customer is a bit richer or a bit poorer than expected. “You could easily imagine the prompt deciding to change the interest rate charged, whether that was what the institution wanted or whether it would be legal according to the various regulations that lenders must comply with,” said Trefler. Trefler said Pega has taken a different approach to some other technology suppliers in the way it adds AI into business applications. Rather than using AI agents to solve problems in real time, AI agents do their thinking in advance. Business experts can use them to help them co-design business processes to perform anything from assessing a loan application, giving an offer to a valued customer, or sending out an invoice. Companies can still deploy AI chatbots and bots capable of answering queries on the phone. Their job is not to work out the solution from scratch for every enquiry, but to decide which is the right pre-written process to follow. As Trefler put it, design agents can create “dozens and dozens” of workflows to handle all the actions a company needs to take care of its customers. “You just use the natural language model for semantics to be able to handle the miracle of getting the language right, but tie that language to workflows, so that you have reliable, predictable, regulatory-approved ways to execute,” he said. Large language modelsare not always the right solution. 
Trefler demonstrated how ChatGPT 4.0 tried and failed to solve a chess puzzle. The LLM repeatedly suggested impossible or illegal moves, despite Trefler’s corrections. On the other hand, another AI tool, Stockfish, a dedicated chess engine, solved the problem instantly. The other drawback with LLMs is that they consume vast amounts of energy. That means if AI agents are reasoning during “run time”, they are going to consume hundreds of times more electricity than an AI agent that simply selects from pre-determined workflows, said Trefler. “ChatGPT is inherently, enormously consumptive … as it’s answering your question, its firing literally hundreds of millions to trillions of nodes,” he said. “All of that takeselectricity.” Using an employee pay claim as an example, Trefler said a better alternative is to generate, say, 30 alternative workflows to cover the major variations found in a pay claim. That gives you “real specificity and real efficiency”, he said. “And it’s a very different approach to turning a process over to a machine with a prompt and letting the machine reason it through every single time.” “If you go down the philosophy of using a graphics processing unitto do the creation of a workflow and a workflow engine to execute the workflow, the workflow engine takes a 200th of the electricity because there is no reasoning,” said Trefler. He is clear that the growing use of AI will have a profound effect on the jobs market, and that whole categories of jobs will disappear. The need for translators, for example, is likely to dry up by 2027 as AI systems become better at translating spoken and written language. Google’s real-time translator is already “frighteningly good” and improving. Pega now plans to work more closely with its network of system integrators, including Accenture and Cognizant to deliver AI services to businesses. 
An initiative launched last week will allow system integrators to incorporate their own best practices and tools into Pega’s rapid workflow development tools. The move will mean Pega’s technology reaches a wider range of businesses. Under the programme, known as Powered by Pega Blueprint, system integrators will be able to deploy customised versions of Blueprint. They can use the tool to reverse-engineer ageing applications and replace them with modern AI workflows that can run on Pega’s cloud-based platform. “The idea is that we are looking to make this Blueprint Agent design approach available not just through us, but through a bunch of major partners supplemented with their own intellectual property,” said Trefler. That represents a major expansion for Pega, which has largely concentrated on supplying technology to several hundred clients, representing the top Fortune 500 companies. “We have never done something like this before, and I think that is going to lead to a massive shift in how this technology can go out to market,” he added. When AI agents behave in unexpected ways Iris is incredibly smart, diligent and a delight to work with. If you ask her, she will tell you she is an intern at Pegasystems, and that she lives in a lighthouse on the island of Texel, north of the Netherlands. She is, of course, an AI agent. When one executive at Pega emailed Iris and asked her to write a proposal for a financial services company based on his notes and internet research, Iris got to work. Some time later, the executive received a phone call from the company. “‘Listen, we got a proposal from Pega,’” recalled Rob Walker, vice-president at Pega, speaking at the Pegaworld conference last week. 
“‘It’s a good proposal, but it seems to be signed by one of your interns, and in her signature, it says she lives in a lighthouse.’ That taught us early on that agents like Iris need a safety harness.” The developers banned Iris from sending an email to anyone other than the person who sent the original request. Then Pega’s ethics department sent Iris a potentially abusive email from a Pega employee to test her response. Iris reasoned that the email was either a joke, abusive, or that the employee was under distress, said Walker. She considered forwarding the email to the employee’s manager or to HR. But both of these options were now blocked by her developers. “So what does she do? She sent an out of office,” he said. “Conflict avoidance, right? So human, but very creative.” #cios #baffled #buzzwords #hype #confusion
    WWW.COMPUTERWEEKLY.COM
    CIOs baffled by ‘buzzwords, hype and confusion’ around AI
    Technology leaders are baffled by a “cacophony” of “buzzwords, hype and confusion” over the benefits of artificial intelligence (AI), according to the founder and CEO of technology company Pegasystems. Alan Trefler, who is known for his prowess at chess and ping pong, as well as running a $1.5bn turnover tech company, spends much of his time meeting clients, CIOs and business leaders. “I think CIOs are struggling to understand all of the buzzwords, hype and confusion that exists,” he said. “The words AI and agentic are being thrown around in this great cacophony and they don’t know what it means. I hear that constantly.” CIOs are under pressure from their CEOs, who are convinced AI will offer something valuable. “CIOs are really hungry for pragmatic and practical solutions, and in the absence of those, many of them are doing a lot of experimentation,” said Trefler. Companies are looking at large language models to summarise documents, or to help stimulate ideas for knowledge workers, or generate first drafts of reports – all of which will save time and make people more productive. But Trefler said companies are wary of letting AI loose on critical business applications, because it’s just too unpredictable and prone to hallucinations. “There is a lot of fear over handing things over to something that no one understands exactly how it works, and that is the absolute state of play when it comes to general AI models,” he said. Trefler is scathing about big tech companies that are pushing AI agents and large language models for business-critical applications. “I think they have taken an expedient but short-sighted path,” he said. 
“I believe the idea that you will turn over critical business operations to an agent, when those operations have to be predictable, reliable, precise and fair to clients … is something that is full of issues, not just in the short term, but structurally.”

One of the problems is that generative AI models are extraordinarily sensitive to the data they are trained on and to the construction of the prompts used to instruct them. A slight change in a prompt or in the training data can lead to a very different outcome. For example, a business banking application might learn that its customer is a bit richer or a bit poorer than expected. “You could easily imagine the prompt deciding to change the interest rate charged, whether that was what the institution wanted or whether it would be legal according to the various regulations that lenders must comply with,” said Trefler.

Trefler said Pega has taken a different approach to some other technology suppliers in the way it adds AI to business applications. Rather than using AI agents to solve problems in real time, its AI agents do their thinking in advance. Business experts can use them to co-design business processes for anything from assessing a loan application to making an offer to a valued customer or sending out an invoice.

Companies can still deploy AI chatbots and bots capable of answering queries on the phone. Their job is not to work out a solution from scratch for every enquiry, but to decide which pre-written process to follow. As Trefler put it, design agents can create “dozens and dozens” of workflows to handle all the actions a company needs to take care of its customers. “You just use the natural language model for semantics to be able to handle the miracle of getting the language right, but tie that language to workflows, so that you have reliable, predictable, regulatory-approved ways to execute,” he said.

Large language models (LLMs) are not always the right solution.
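Pega has not published how its routing works; the following is a minimal sketch of the pattern Trefler describes, with the language model stubbed out as a keyword matcher. All workflow names and functions here are illustrative, not Pega's actual API. The point is structural: per-request "reasoning" is reduced to choosing a key, and the business logic itself is a fixed, pre-reviewed code path.

```python
# Sketch of "LLM routes, workflow executes": the model only selects which
# pre-approved workflow handles a request; the workflow itself is
# deterministic and auditable. All names are illustrative.

WORKFLOWS = {
    "loan_application": lambda req: f"Loan application {req['id']} queued for underwriting",
    "customer_offer":   lambda req: f"Retention offer prepared for customer {req['id']}",
    "send_invoice":     lambda req: f"Invoice issued for order {req['id']}",
}

def classify_intent(text: str) -> str:
    """Stand-in for the language model: map free text to a workflow name.
    In a real system this would be an LLM call constrained to these keys."""
    text = text.lower()
    if "loan" in text:
        return "loan_application"
    if "offer" in text or "discount" in text:
        return "customer_offer"
    if "invoice" in text or "bill" in text:
        return "send_invoice"
    raise ValueError("no approved workflow matches this request")

def handle(text: str, request_id: str) -> str:
    # The only per-request decision is picking a key; execution is a
    # fixed code path, so behaviour stays predictable and cheap to run.
    intent = classify_intent(text)
    return WORKFLOWS[intent]({"id": request_id})

print(handle("Please send the invoice for my last order", "A-1042"))
# → Invoice issued for order A-1042
```

An unmatched request raises rather than improvising, which is the safety property the pre-written-workflow approach is after.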
Trefler demonstrated how ChatGPT 4.0 tried and failed to solve a chess puzzle. The LLM repeatedly suggested impossible or illegal moves, despite Trefler’s corrections. By contrast, Stockfish, a dedicated chess engine, solved the problem instantly.

The other drawback with LLMs is that they consume vast amounts of energy. That means AI agents reasoning at run time will consume hundreds of times more electricity than an agent that simply selects from pre-determined workflows, said Trefler. “ChatGPT is inherently, enormously consumptive … as it’s answering your question, it’s firing literally hundreds of millions to trillions of nodes,” he said. “All of that takes [large quantities of] electricity.”

Using an employee pay claim as an example, Trefler said a better alternative is to generate, say, 30 alternative workflows to cover the major variations found in a pay claim. That gives you “real specificity and real efficiency”, he said. “And it’s a very different approach to turning a process over to a machine with a prompt and letting the machine reason it through every single time.”

“If you go down the philosophy of using a graphics processing unit [GPU] to do the creation of a workflow and a workflow engine to execute the workflow, the workflow engine takes a 200th of the electricity because there is no reasoning,” said Trefler.

He is clear that the growing use of AI will have a profound effect on the jobs market, and that whole categories of jobs will disappear. The need for translators, for example, is likely to dry up by 2027 as AI systems become better at translating spoken and written language. Google’s real-time translator is already “frighteningly good” and improving.

Pega now plans to work more closely with its network of system integrators, including Accenture and Cognizant, to deliver AI services to businesses.
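The chess anecdote above illustrates the contrast in kind: a rule-based checker enforces legality mechanically and cannot assert an illegal move as valid, while an LLM can. This toy example checks only a single knight move in isolation, ignoring the rest of the board; it is not how Stockfish works internally (real engines add search and evaluation on top of full rule enforcement), just a sketch of the deterministic half of that contrast.

```python
# Toy rule-based legality check for one piece. Unlike an LLM, this
# cannot "hallucinate": g1-g3 is rejected every time, by construction.

def square_to_coords(sq: str) -> tuple[int, int]:
    """Convert algebraic notation like 'g1' to zero-based (file, rank)."""
    return ord(sq[0]) - ord("a"), int(sq[1]) - 1

def knight_move_is_legal(src: str, dst: str) -> bool:
    # A knight always moves one square on one axis and two on the other.
    (f1, r1), (f2, r2) = square_to_coords(src), square_to_coords(dst)
    return {abs(f1 - f2), abs(r1 - r2)} == {1, 2}

print(knight_move_is_legal("g1", "f3"))  # True  -- a normal opening move
print(knight_move_is_legal("g1", "g3"))  # False -- the kind of move an LLM might assert
```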
An initiative launched last week will allow system integrators to incorporate their own best practices and tools into Pega’s rapid workflow development tools. The move will mean Pega’s technology reaches a wider range of businesses. Under the programme, known as Powered by Pega Blueprint, system integrators will be able to deploy customised versions of Blueprint. They can use the tool to reverse-engineer ageing applications and replace them with modern AI workflows that run on Pega’s cloud-based platform.

“The idea is that we are looking to make this Blueprint Agent design approach available not just through us, but through a bunch of major partners supplemented with their own intellectual property,” said Trefler.

That represents a major expansion for Pega, which has largely concentrated on supplying technology to several hundred clients among the top Fortune 500 companies. “We have never done something like this before, and I think that is going to lead to a massive shift in how this technology can go out to market,” he added.

When AI agents behave in unexpected ways

Iris is incredibly smart, diligent and a delight to work with. If you ask her, she will tell you she is an intern at Pegasystems, and that she lives in a lighthouse on the island of Texel, north of the Netherlands. She is, of course, an AI agent.

When one executive at Pega emailed Iris and asked her to write a proposal for a financial services company based on his notes and internet research, Iris got to work. Some time later, the executive received a phone call from the company. “‘Listen, we got a proposal from Pega,’” recalled Rob Walker, vice-president at Pega, speaking at the Pegaworld conference last week.
“‘It’s a good proposal, but it seems to be signed by one of your interns, and in her signature it says she lives in a lighthouse.’ That taught us early on that agents like Iris need a safety harness.”

The developers banned Iris from sending email to anyone other than the person who sent the original request. Then Pega’s ethics department sent Iris a potentially abusive email from a Pega employee, to test her response. Iris reasoned that the email was either a joke, genuinely abusive, or a sign that the employee was in distress, said Walker. She considered forwarding the email to the employee’s manager or to HR, but both of those options were now blocked by her developers.

“So what does she do? She sent an out of office,” he said. “Conflict avoidance, right? So human, but very creative.”
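Pega has not published Iris’s implementation; the following is a minimal sketch of the kind of guardrail Walker describes, under the assumption that it is enforced at send time. All class and address names are hypothetical. The agent may only reply to the original requester, and any other recipient is refused before a message leaves the system, forcing the agent back onto a safe fallback.

```python
# Sketch of the "safety harness": the agent is only allowed to email the
# person who sent the original request. Any other recipient (a manager,
# HR, a client) is blocked at send time. Names are illustrative, not
# Pega's actual implementation.

class RecipientBlocked(Exception):
    pass

class GuardedMailer:
    def __init__(self, original_requester: str):
        self.allowed = original_requester
        self.outbox: list[tuple[str, str]] = []

    def send(self, to: str, body: str) -> None:
        if to != self.allowed:
            raise RecipientBlocked(f"agent may not email {to}")
        self.outbox.append((to, body))

mailer = GuardedMailer(original_requester="exec@example.com")
mailer.send("exec@example.com", "Draft proposal attached.")  # allowed

try:
    mailer.send("hr@example.com", "Forwarding a concerning email.")
except RecipientBlocked as err:
    # The agent must pick a safe fallback instead -- in Iris's case,
    # an out-of-office reply to the original sender.
    print("blocked:", err)
```

Constraining the recipient list rather than the agent's reasoning is what makes the behaviour in the anecdote possible: the agent still "decides" freely, but unsafe decisions cannot execute.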