• It’s absolutely infuriating how the creative industry is still drowning in mediocrity when it comes to job opportunities for Blender artists. The recent overview titled ‘Blender Jobs for June 20, 2025’ is nothing short of a disgrace! What are we doing here? Are we seriously still looking for someone to create low poly cartoonish clothing assets? This is 2025, people! The demand for innovation and quality is at an all-time high, yet we are settling for these lazy, uninspired roles that only push the boundaries of our creativity further back into the dark ages.

    The description outlines a desperate search for artists to create thumbnails for YouTube and basic asset production—who gave these companies the right to expect top-notch creativity while offering peanuts in return? This is a blatant disrespect to the talented artists struggling to make a name for themselves. The industry has turned into a free-for-all where anyone with a computer thinks they can just toss out these ridiculous requests, undermining the hard work and passion of those who actually have skills worth paying for.

    “Stealth Startup” and “Pizza Party Productions”? Really? Is this some kind of joke? These names scream lack of professionalism and vision. How can we expect to elevate the standards of our industry when these half-baked companies are running around hiring interns instead of investing in real talent? It’s ludicrous! What’s next? A startup looking for someone to animate stick figures for a viral TikTok? Come on!

    Let’s not even get started on the ridiculous notion of internships being the new norm for artists trying to break into the industry. The term “3D Artist Intern” is a euphemism for “overworked and underpaid.” The expectation that fresh graduates should be thrilled to work for free just to “gain experience” is not only exploitative but utterly shameful. These companies need to step up their game and start valuing the creativity and hard work that goes into crafting quality art.

    Every time I scroll through these job postings, I feel my blood boil. Are we going to continue to allow this cycle of mediocrity to persist? It’s time for artists to take a stand and demand better. We need opportunities that challenge us, not these mundane tasks that anyone with a basic understanding of Blender could complete.

    We deserve to work in an environment that fosters creativity, innovation, and respect for our craft. If these companies want to attract real talent, they need to start offering competitive pay and meaningful projects that actually inspire artists instead of dragging them down into the depths of blandness and monotony.

    Wake up, industry! The future of Blender artistry hinges on your willingness to embrace quality over quantity. Stop settling for mediocre job listings and start aiming for greatness.

    #BlenderJobs #3DArtist #CreativityMatters #ArtIndustry #DemandBetter
    Blender Jobs for June 20, 2025
    Here's an overview of the most recent Blender jobs on Blender Artists, ArtStation and 3djobs.xyz: Looking for someone to create some low poly cartoonish clothing asset for my character I'm looking for an artist to make me a Thumbnail for YouTube Vert
  • THIS Unexpected Rug Trend Is Taking Over—Here's How to Style It

    Pictured above: A dining room in Dallas, Texas, designed by Studio Thomas James.

    As you design (or redesign) a room at home, you may have specific ideas about the paint color, furniture placement, and even the lighting scheme your space requires to truly sing. But if you're not also considering what type of rug will ground the entire look, this essential room-finishing touch may end up feeling like an afterthought. After all, one of the best ways to ensure your space looks expertly planned from top to bottom is to opt for a rug that can anchor the whole space—and, in many cases, that means a maximalist rug.

    A maximalist-style rug, or one that has a bold color, an abstract or asymmetrical pattern, an organic shape, distinctive pile texture, or unconventional application (such as functioning as a wall mural), offers a fresh answer to the perpetual design question, "What is this room missing?" Instead of defaulting to a neutral-colored, low-pile rug that goes largely unnoticed, a compelling case can be made for choosing a design that functions more as a tactile piece of art. Asha Chaudhary, the CEO of Jaipur, India-based rug brand Jaipur Living, has noticed many consumers moving away from "safe" interiors and embracing designs that pop with personality. "There's a growing desire to design with individuality and soul. A vibrant or highly detailed rug can instantly transform a space by adding movement, contrast, and character, all in one single piece," she says.

    Ahead, we spoke to Chaudhary to get her essential tips for choosing the right maximalist rug for your design style, how to evaluate the construction of a piece, and even why you should think outside the box when it comes to the standard area rug shape. Turns out, this foundational mainstay can be a deeply personal expression of identity.

    When a Maximalist Rug Makes Sense
    Pictured: An outdoor lounge in Healdsburg, California, designed by Sheldon Harte. (Photo: John Merkl)

    As you might imagine, integrating a maximalist rug into an existing aesthetic isn't about making a one-to-one swap. You'll want to refine your overall approach and potentially tweak elements of the room already in place, too. "I like to think about rugs this way: Sometimes they play a supporting role, and other times, they're the hero of the room," Chaudhary says. "Statement rugs are designed to stand out. They tell stories, stir emotion, and ground a space the way a bold piece of art would."

    In Chaudhary's work with interior designers who are selecting rugs for clients' high-end homes, she's noticed that tastes have recently swung toward a more maximalist ethos. "Designers are leaning into expression and individuality," she says. "There's growing interest in bold patterns, asymmetry, and designs that reflect the hand of the maker. Color-wise, we're seeing more adventurous palettes: think jades, bordeauxes, and terracottas. And there's a strong desire for rugs that feel personal, like they carry a story or a memory."

    Pictured: Jaipur Living's Manchaha rugs are one-of-a-kind, hand-knotted pieces woven from upcycled hand-spun yarn that follow a freeform design of the artisan's choosing.

    Jaipur Living is uniquely positioned to fulfill the need for one-of-a-kind rugs that are not just visually striking within a space, but deeply meaningful as well. The brand's Manchaha collection (meaning "expression of my heart" in Hindi) comprises rugs made of upcycled yarn, each hand-knotted by rural Indian artisans in freeform shapes that capture the imagination. "Each piece is designed from the heart of the artisan, with no predetermined pattern, just emotion, inspiration, and memory woven together by hand. What excites me most is this shift away from perfection and toward beauty that feels lived-in, layered, and real," she adds.

    How to Choose the Right Maximalist Rug
    Pictured: Design firm Drake/Anderson reimagined this Greenwich, Connecticut, living room. (Photo: Brittany Ambridge)

    Good news for those who are taking a slow-decorating approach with their home: Finding the right maximalist rug for your space means looking at the big picture first. "Most shoppers start with size and color, but the first question should really be, 'How will this space be used?' That answer guides everything—material, construction, and investment," says Chaudhary.

    Are you styling an off-limits living room or a lively family den where guests may occasionally wander in with shoes on? In considering your materials, you may want to opt for a performance-fabric rug for areas subject to frequent wear and tear, but Chaudhary has a clear favorite for nearly all other spaces. "Wool is the gold standard. It's naturally resilient, stain-resistant, and has excellent bounce-back, meaning it recovers well from foot traffic and furniture impressions," she says. "It's also moisture-wicking and insulating, making it an ideal choice for both comfort and durability."

    As far as construction goes, Chaudhary breaks down the most widely available options on the market: A hand-knotted rug, crafted by tying individual knots, is the most durable construction and can last decades, even with daily use. Hand-tufted rugs offer a beautiful look at a more accessible price point, but typically won't have the same lifespan. Power-loomed rugs can be a great solution for high-traffic areas when made with quality materials.

    Though they fall at the higher end of the price spectrum, hand-knotted rugs aren't meant to be untouchable—after all, their quality construction helps ensure that they can stand up to minor mishaps in day-to-day living. This can shift your appreciation of a rug from a humble underfoot accent to a long-lasting art piece worthy of care and intentional restoration when the time comes. "Understanding these distinctions helps consumers make smarter, more lasting investments for their homes," Chaudhary says.

    Opting for Unconventional Applications
    Pictured: Sarah Vaile designed this vibrant vestibule in Chicago, Illinois. (Photo: Lesley Unruh)

    Maximalist rugs encompass an impressively broad category, and even if you already have an area rug rolled out that you're happy with, there are alternative shapes you can choose, or ways in which they can imbue creative expression far beyond the floor. "I've seen some incredibly beautiful applications of rugs as wall art. Especially when it comes to smaller or one-of-a-kind pieces, hanging them allows people to appreciate the detail, texture, and artistry at eye level," says Chaudhary. "Some designers have also used narrow runners as table coverings or layered over larger textiles for added dimension."

    Another interesting facet of maximalist rugs is that you can think outside the rectangle in terms of silhouette. "We're seeing more interest in irregular rug shapes, think soft ovals, curves, even asymmetrical outlines," says Chaudhary. "Clients are designing with more fluidity and movement in mind, especially in open-plan spaces. Extra-long runners, oversized circles, and multi-shape layouts are also trending."

    Ultimately, the best maximalist rug for you is one that meets your home's needs while highlighting your personal style. In spaces where dramatic light fixtures or punchy paint colors aren't practical or allowed (in the case of renters), a statement-making rug is the ideal solution. While trends will continue to evolve, homing in on a unique—even tailor-made—design will help ensure aesthetic longevity. Follow House Beautiful on Instagram and TikTok.
  • Inside the thinking behind Frontify Futures' standout brand identity

    Who knows where branding will go in the future? However, for many of us working in the creative industries, it's our job to know. So it's something we need to start talking about, and Frontify Futures wants to be the platform where that conversation unfolds.
    This ambitious new thought leadership initiative from Frontify brings together an extraordinary coalition of voices—CMOs who've scaled global brands, creative leaders reimagining possibilities, strategy directors pioneering new approaches, and cultural forecasters mapping emerging opportunities—to explore how effectiveness, innovation, and scale will shape tomorrow's brand-building landscape.
    But Frontify Futures isn't just another content platform. Excitingly, from a design perspective, it's also a living experiment in what brand identity can become when technology meets craft, when systems embrace chaos, and when the future itself becomes a design material.
    Endless variation
    What makes Frontify Futures' typography unique isn't just its custom foundation: it's how that foundation enables endless variation and evolution. This was primarily achieved, reveals developer and digital art director Daniel Powell, by building bespoke tools for the project.

    "Rather than rely solely on streamlined tools built for speed and production, we started building our own," he explains. "The first was a node-based design tool that takes our custom Frame and Hairline fonts as a base and uses them as the foundations for our type generator. With it, we can generate unique type variations for each content strand—each article, even—and create both static and animated type, exportable as video or rendered live in the browser."
    Each of these tools included what Daniel calls a "chaos element: a small but intentional glitch in the system. A microstatement about the nature of the future: that it can be anticipated but never fully known. It's our way of keeping gesture alive inside the system."
    One of the clearest examples of this is the colour palette generator. "It samples from a dynamic photo grid tied to a rotating colour wheel that completes one full revolution per year," Daniel explains. "But here's the twist: wind speed and direction in St. Gallen, Switzerland—Frontify's HQ—nudges the wheel unpredictably off-centre. It's a subtle, living mechanic; each article contains a log of the wind data in its code as a kind of Easter Egg."
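    To make that mechanic concrete, here is a minimal sketch of how a year-long, wind-nudged colour wheel could work. It is illustrative only: the names, the nudge formula, and the wind values are assumptions, not Frontify's actual code.

        // Hypothetical sketch: a hue wheel that completes one revolution per year,
        // nudged off-centre by live wind data (passed in here as plain values).
        interface WindSample {
          speedKmh: number;     // wind speed in St. Gallen
          directionDeg: number; // wind direction, 0-360
        }

        // Base hue: fraction of the year elapsed, mapped onto 0-360 degrees.
        function baseHue(date: Date): number {
          const start = Date.UTC(date.getUTCFullYear(), 0, 1);
          const yearMs = 365.25 * 24 * 60 * 60 * 1000;
          return (((date.getTime() - start) / yearMs) * 360) % 360;
        }

        // "Chaos element": wind speed scales the nudge, direction biases it.
        function nudgedHue(date: Date, wind: WindSample): number {
          const nudge = (wind.speedKmh / 10) * Math.sin((wind.directionDeg * Math.PI) / 180);
          return (baseHue(date) + nudge + 360) % 360;
        }

        // Example: feed the result into a palette as an HSL colour.
        const hue = nudgedHue(new Date(), { speedKmh: 14, directionDeg: 225 });
        console.log(`hsl(${hue.toFixed(1)}, 70%, 55%)`);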

    Another favourite of Daniel's—yet to be released—is an expanded version of Conway's Game of Life. "It's been running continuously for over a month now, evolving patterns used in one of the content strand headers," he reveals. "The designer becomes a kind of photographer, capturing moments from a petri dish of generative motion."
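    For context, the rules being "photographed" here are the classic ones; a minimal generation step looks something like this (a generic sketch on a wrapping grid, not Frontify's expanded version):

        // One generation of Conway's Game of Life on a toroidal (wrapping) grid.
        // A live cell survives with 2 or 3 neighbours; a dead cell is born with exactly 3.
        function step(grid: boolean[][]): boolean[][] {
          const h = grid.length;
          const w = grid[0].length;
          return grid.map((row, y) =>
            row.map((alive, x) => {
              let n = 0;
              for (let dy = -1; dy <= 1; dy++)
                for (let dx = -1; dx <= 1; dx++)
                  if (dx !== 0 || dy !== 0)
                    n += grid[(y + dy + h) % h][(x + dx + w) % w] ? 1 : 0;
              return alive ? n === 2 || n === 3 : n === 3;
            })
          );
        }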
    Core Philosophy
    In developing this unique identity, two phrases stood out to Daniel as guiding lights from the outset. The first was, 'We will show, not tell.'
    "This became the foundation for how we approached the identity," recalls Daniel. "It had to feel like a playground: open, experimental, and fluid. Not overly precious or prescriptive. A system the Frontify team could truly own, shape, and evolve. A platform, not a final product. A foundation, just as the future is always built on the past."

    The second guiding phrase, pulled directly from Frontify's rebrand materials, felt like "a call to action," says Daniel. "'Gestural and geometric. Human and machine. Art and science.' It's a tension that feels especially relevant in the creative industries today. As technology accelerates, we ask ourselves: how do we still hold onto our craft? What does it mean to be expressive in an increasingly systemised world?"
    Stripped back and skeletal typography
    The identity that Daniel and his team created reflects these themes through typography that literally embodies the platform's core philosophy. "It really started from this idea of the future being built upon the 'foundations' of the past," he explains. "At the time Frontify Futures was being created, Frontify itself was going through a rebrand. With that, they'd started using a new variable typeface called Cranny, a custom cut of Azurio by Narrow Type."
    Daniel's team took Cranny and "pushed it into a stripped-back and almost skeletal take". The result was Crany-Frame and Crany-Hairline. "These fonts then served as our base scaffolding," he continues. "They were never seen in design, but instead, we applied decoration to them to produce new typefaces for each content strand, giving the identity the space to grow and allow new ideas and shapes to form."

    As Daniel saw it, the demands on the typeface were pretty simple. "It needed to set an atmosphere. We needed it to feel alive. We wanted it to be something shifting and repositioning. And so, while we have a bunch of static cuts of each base style, we rarely use them; the typefaces you see on the website and social only exist at the moment as a string of parameters to create a general style that we use to create live animating versions of the font generated on the fly."
    In addition to setting the atmosphere, it needed to be extremely flexible and feature live inputs, as a significant part of the branding is about the unpredictability of the future. So Daniel's team built in those aforementioned "chaos moments", where everything from user interaction to live wind speeds can affect the font.
    Design Process
    The process of creating the typefaces is a fascinating one. "We started by working with the custom cut of Azurio (Cranny) from Narrow Type. We then redrew it to take inspiration from how a frame and a hairline could be produced from this original cut. From there, we built a type generation tool that uses them as a base.
    "It's a custom node-based system that lets us really get in there and play with the overlays for everything from grid-sizing, shapes and timing for the animation," he outlines. "We used this tool to design the variants for different content strands. We weren't just designing letterforms; we were designing a comprehensive toolset that could evolve in tandem with the content.
    "That became a big part of the process: designing systems that designers could actually use, not just look at; again, it was a wider conversation and concept around the future and how designers and machines can work together."

    In short, the evolution of the typeface system reflects the platform's broader commitment to continuous growth and adaptation. "The whole idea was to make something open enough to keep building on," Daniel stresses. "We've already got tools in place to generate new weights, shapes and animated variants, and the tool itself still has a ton of unused functionality.
    "I can see that growing as new content strands emerge; we'll keep adapting the type with them," he adds. "It's less about version numbers and more about ongoing movement. The system's alive; that's the point.
    A provocation for the industry
    In this context, the Frontify Futures identity represents more than smart visual branding; it's also a manifesto for how creative systems might evolve in an age of increasing automation and systematisation. By building unpredictability into their tools, embracing the tension between human craft and machine precision, and creating systems that grow and adapt rather than merely scale, Daniel and the Frontify team have created something that feels genuinely forward-looking.
    For creatives grappling with similar questions about the future of their craft, Frontify Futures offers both inspiration and practical demonstration. It shows how brands can remain human while embracing technological capability, how systems can be both consistent and surprising, and how the future itself can become a creative medium.
    This clever approach suggests that the future of branding lies not in choosing between human creativity and systematic efficiency but in finding new ways to make them work together, creating something neither could achieve alone.
  • New Zealand’s Email Security Requirements for Government Organizations: What You Need to Know

    The Secure Government Email Common Implementation Framework
    New Zealand’s government is introducing a comprehensive email security framework designed to protect official communications from phishing and domain spoofing. This new framework, which will be mandatory for all government agencies by October 2025, establishes clear technical standards to enhance email security and retire the outdated SEEMail service. 
    Key Takeaways

    All NZ government agencies must comply with new email security requirements by October 2025.
    The new framework strengthens trust and security in government communications by preventing spoofing and phishing.
    The framework mandates TLS 1.2+, SPF, DKIM, DMARC with p=reject, MTA-STS, and DLP controls.
    EasyDMARC simplifies compliance with our guided setup, monitoring, and automated reporting.

    Start a Free Trial

    What is the Secure Government Email Common Implementation Framework?
    The Secure Government Email (SGE) Common Implementation Framework is a new government-led initiative in New Zealand designed to standardize email security across all government agencies. Its main goal is to secure external email communication, reduce domain spoofing in phishing attacks, and replace the legacy SEEMail service.
    Why is New Zealand Implementing New Government Email Security Standards?
    The framework was developed by New Zealand’s Department of Internal Affairs as part of its role in managing ICT Common Capabilities. It leverages modern email security controls via the Domain Name System (DNS) to enable the retirement of the legacy SEEMail service and provide:

    Encryption for transmission security
    Digital signing for message integrity
    Basic non-repudiation
    Domain spoofing protection

    These improvements apply to all emails, not just those routed through SEEMail, offering broader protection across agency communications.
    What Email Security Technologies Are Required by the New NZ SGE Framework?
    The SGE Framework outlines the following key technologies that agencies must implement:

    TLS 1.2 or higher with implicit TLS enforced
    TLS-RPT
    SPF
    DKIM
    DMARC with reporting
    MTA-STS
    Data Loss Prevention (DLP) controls

    These technologies work together to ensure encrypted email transmission, validate sender identity, prevent unauthorized use of domains, and reduce the risk of sensitive data leaks.

    Get in touch

    When Do NZ Government Agencies Need to Comply with this Framework?
    All New Zealand government agencies are expected to fully implement the Secure Government Email Common Implementation Framework by October 2025. Agencies should begin their planning and deployment now to ensure full compliance by the deadline.
    The All of Government Secure Email Common Implementation Framework v1.0
    What are the Mandated Requirements for Domains?
    Below are the exact requirements for all email-enabled domains under the new framework.
    TLS: Minimum TLS 1.2. TLS 1.1, 1.0, SSL, or clear-text not permitted.
    TLS-RPT: All email-sending domains must have TLS reporting enabled.
    SPF: Must exist and end with -all.
    DKIM: All outbound email from every sending service must be DKIM-signed at the final hop.
    DMARC: Policy of p=reject on all email-enabled domains. adkim=s is recommended when not bulk-sending.
    MTA-STS: Enabled and set to enforce.
    Implicit TLS: Must be configured and enforced for every connection.
    Data Loss Prevention: Enforce in line with the New Zealand Information Security Manual (NZISM) and Protective Security Requirements.
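    Taken together, the DNS footprint of a compliant email-enabled domain would look roughly like the sketch below. Everything here is illustrative: example.govt.nz, the selector name, the include host, the truncated key, and the report mailboxes are placeholders, not real agency values.

        ; SPF: authorised senders only, hard fail for everyone else
        example.govt.nz.                  TXT "v=spf1 include:_spf.mail-gateway.example -all"

        ; DKIM: public key for the sending service's selector
        sel1._domainkey.example.govt.nz.  TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."

        ; DMARC: reject policy, strict DKIM alignment, aggregate reports
        _dmarc.example.govt.nz.           TXT "v=DMARC1; p=reject; adkim=s; rua=mailto:dmarc@example.govt.nz"

        ; MTA-STS policy indicator and TLS reporting address
        _mta-sts.example.govt.nz.         TXT "v=STSv1; id=20251001000000"
        _smtp._tls.example.govt.nz.       TXT "v=TLSRPTv1; rua=mailto:tlsrpt@example.govt.nz"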
    Compliance Monitoring and Reporting
    The All of Government Service Delivery (AoGSD) team will be monitoring compliance with the framework. Monitoring will initially cover SPF, DMARC, and MTA-STS settings and will be expanded to include DKIM. Changes to these settings will be monitored, enabling reporting on email security compliance across all government agencies. Ongoing monitoring will highlight changes to domains, ensure new domains are set up with security in place, and monitor the implementation of future email security technologies.
    Should compliance changes occur, such as an agency’s SPF record being changed from -all to ~all, this will be captured so that the AoGSD Security Team can investigate. They will then communicate directly with the agency to determine if an issue exists or if an error has occurred, reviewing each case individually.
    Deployment Checklist for NZ Government Compliance

    Enforce TLS 1.2 minimum, implicit TLS, MTA-STS & TLS-RPT
    SPF with -all
    DKIM on all outbound email
    DMARC p=reject 
    adkim=s where suitable
    For non-email/parked domains: SPF -all, empty DKIM, DMARC reject strict
    Compliance dashboard
    Inbound DMARC evaluation enforced
    DLP aligned with NZISM

    Start a Free Trial

    How EasyDMARC Can Help Government Agencies Comply
    EasyDMARC provides a comprehensive email security solution that simplifies the deployment and ongoing management of DNS-based email security protocols like SPF, DKIM, and DMARC with reporting. Our platform offers automated checks, real-time monitoring, and a guided setup to help government organizations quickly reach compliance.
    1. TLS-RPT / MTA-STS audit
    EasyDMARC lets you enable the Managed MTA-STS and TLS-RPT option with a single click. We provide the required DNS records and continuously monitor them for issues, delivering reports on TLS negotiation problems. This helps agencies ensure secure email transmission and quickly detect delivery or encryption failures.

    Note: In this screenshot, you can see how to deploy MTA-STS and TLS Reporting by adding just three CNAME records provided by EasyDMARC. It’s recommended to start in “testing” mode, evaluate the TLS-RPT reports, and then gradually switch your MTA-STS policy to “enforce”. The process is simple and takes just a few clicks.

    As shown above, EasyDMARC parses incoming TLS reports into a centralized dashboard, giving you clear visibility into delivery and encryption issues across all sending sources.
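
    For reference, a self-hosted MTA-STS deployment (as an alternative to the managed option) consists of two TXT records plus a small policy file served over HTTPS. The sketch below follows the standard formats from RFC 8461 (MTA-STS) and RFC 8460 (TLS-RPT); the host names, policy id, and reporting address are placeholders:

        _mta-sts.agency.example.nz.   TXT  "v=STSv1; id=20251001120000"
        _smtp._tls.agency.example.nz. TXT  "v=TLSRPTv1; rua=mailto:tls-reports@agency.example.nz"

    The policy file, served at https://mta-sts.agency.example.nz/.well-known/mta-sts.txt:

        version: STSv1
        mode: testing
        mx: mail.agency.example.nz
        max_age: 86400

    Once the TLS-RPT reports come back clean, change mode from testing to enforce, which is what the framework ultimately requires.
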
    2. SPF with “-all”
    In the EasyDMARC platform, you can run the SPF Record Generator to create a compliant record. Publish your v=spf1 record with “-all” to enforce a hard fail for unauthorized senders and prevent spoofed emails from passing SPF checks. This strengthens your domain’s protection against impersonation.

    Note: It is highly recommended to start adjusting your SPF record only after you begin receiving DMARC reports and identifying your legitimate email sources. As we’ll explain in more detail below, both SPF and DKIM should be adjusted after you gain visibility through reports.
    Making changes without proper visibility can lead to false positives, misconfigurations, and potential loss of legitimate emails. That’s why the first step should always be setting DMARC to p=none, receiving reports, analyzing them, and then gradually fixing any SPF or DKIM issues.
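
    For teams that want to verify the “-all” requirement programmatically, here is a minimal sketch using the dnspython library (our assumption; any DNS lookup tool works, and the domain is a placeholder):

        import dns.resolver

        def spf_record(domain):
            # Return the domain's SPF TXT record, if one exists.
            try:
                answers = dns.resolver.resolve(domain, "TXT")
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                return None
            for rdata in answers:
                txt = b"".join(rdata.strings).decode("utf-8", "replace")
                if txt.lower().startswith("v=spf1"):
                    return txt
            return None

        def spf_is_compliant(domain):
            # The SGE Framework requires the record to exist and end with -all.
            record = spf_record(domain)
            return record is not None and record.strip().lower().endswith("-all")

        print(spf_is_compliant("agency.example.nz"))
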
    3. DKIM on all outbound email
    DKIM must be configured for all email sources sending emails on behalf of your domain. This is critical, as DKIM plays a bigger role than SPF when it comes to building domain reputation and surviving auto-forwarding, mailing lists, and other edge cases.
    As mentioned above, DMARC reports provide visibility into your email sources, allowing you to implement DKIM accordingly. If you’re using third-party services like Google Workspace, Microsoft 365, or Mimecast, you’ll need to retrieve the public DKIM key from your provider’s admin interface.
    EasyDMARC maintains a backend directory of over 1,400 email sources. We also give you detailed guidance on how to configure SPF and DKIM correctly for major ESPs. 
    Note: At the end of this article, you’ll find configuration links for well-known ESPs like Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid – helping you avoid common misconfigurations and get aligned with SGE requirements.
    If you’re using a dedicated MTA (e.g., Postfix), DKIM must be implemented manually. EasyDMARC’s DKIM Record Generator lets you generate both public and private keys for your server. The private key is stored on your MTA, while the public key must be published in your DNS.
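
    If you would rather generate the key pair yourself, here is a minimal sketch using Python’s cryptography package (our assumption; the Record Generator or openssl produce equivalent output, and the selector name is whatever you choose):

        import base64
        from cryptography.hazmat.primitives import serialization
        from cryptography.hazmat.primitives.asymmetric import rsa

        # Generate a 2048-bit RSA key pair for DKIM signing.
        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

        # The private key (PEM) stays on the MTA, never in DNS.
        private_pem = key.private_bytes(
            encoding=serialization.Encoding.PEM,
            format=serialization.PrivateFormat.PKCS8,
            encryption_algorithm=serialization.NoEncryption(),
        )
        with open("dkim_private.pem", "wb") as f:
            f.write(private_pem)

        # The public key, base64-encoded DER, goes into the DNS TXT record.
        public_der = key.public_key().public_bytes(
            encoding=serialization.Encoding.DER,
            format=serialization.PublicFormat.SubjectPublicKeyInfo,
        )
        txt_value = "v=DKIM1; k=rsa; p=" + base64.b64encode(public_der).decode()
        print("Publish at selector1._domainkey.<yourdomain>:", txt_value)
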

    4. DMARC p=reject rollout
    As mentioned in previous points, DMARC reporting is the first and most important step on your DMARC enforcement journey. Always start with a p=none policy and configure RUA reports to be sent to EasyDMARC. Use the report insights to identify and fix SPF and DKIM alignment issues, then gradually move to p=quarantine and finally p=reject once all legitimate email sources have been authenticated. 
    This phased approach ensures full protection against domain spoofing without risking legitimate email delivery.
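
    In record form, the phased rollout looks like the progression below, published at _dmarc.<yourdomain> (the reporting address is a placeholder):

        Phase 1 (monitor):    "v=DMARC1; p=none; rua=mailto:dmarc-reports@agency.example.nz"
        Phase 2 (quarantine): "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@agency.example.nz"
        Phase 3 (enforce):    "v=DMARC1; p=reject; adkim=s; rua=mailto:dmarc-reports@agency.example.nz"

    Only the final phase satisfies the framework’s p=reject mandate; the earlier phases exist to protect legitimate mail while you fix alignment issues.
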

    5. adkim Strict Alignment Check
    This strict alignment check is not always applicable, especially if you’re using third-party bulk ESPs, such as SendGrid, that require you to set DKIM on a subdomain level. You can set adkim=s in your DMARC TXT record, or simply enable strict mode in EasyDMARC’s Managed DMARC settings. This ensures that only emails with a DKIM signature that exactly matches your domain pass alignment, adding an extra layer of protection against domain spoofing. But only do this if you are NOT a bulk sender.

    6. Securing Non-Email Enabled Domains
    The purpose of deploying email security to non-email-enabled domains, or parked domains, is to prevent messages from being spoofed from those domains. This requirement remains even if the root-level domain has sp=reject set within its DMARC record.
    Under this new framework, you must bulk import and mark parked domains as “Parked.” Crucially, this requires adjusting SPF settings to an empty record, setting DMARC to p=reject, and ensuring an empty DKIM record is in place:

    SPF record: “v=spf1 -all”
    Wildcard DKIM record with empty public key
    DMARC record: “v=DMARC1;p=reject;adkim=s;aspf=s;rua=mailto:…”
    EasyDMARC allows you to add and label parked domains for free. This is important because it helps you monitor any activity from these domains and ensure they remain protected with a strict DMARC policy of p=reject.
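
    Because parked-domain inventories can be large, a quick scripted audit helps. This sketch reuses dnspython (same assumption as above; the domain list is a placeholder):

        import dns.resolver

        PARKED = ["parked-one.example.nz", "parked-two.example.nz"]

        def dmarc_record(domain):
            # DMARC policies are published as TXT records at _dmarc.<domain>.
            try:
                answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                return None
            for rdata in answers:
                txt = b"".join(rdata.strings).decode("utf-8", "replace")
                if txt.lower().startswith("v=dmarc1"):
                    return txt
            return None

        for domain in PARKED:
            record = dmarc_record(domain) or ""
            compliant = "p=reject" in record.replace(" ", "").lower()
            print(domain, "OK" if compliant else "NOT COMPLIANT", record)
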
    7. Compliance Dashboard
    Use EasyDMARC’s Domain Scanner to assess the security posture of each domain with a clear compliance score and risk level. The dashboard highlights configuration gaps and guides remediation steps, helping government agencies stay on track toward full compliance with the SGE Framework.

    8. Inbound DMARC Evaluation Enforced
    You don’t need to apply any changes if you’re using Google Workspace, Microsoft 365, or other major mailbox providers. Most of them already enforce DMARC evaluation on incoming emails.
    However, some legacy Microsoft 365 setups may still quarantine emails that fail DMARC checks, even when the sending domain has a p=reject policy, instead of rejecting them. This behavior can be adjusted directly from your Microsoft Defender portal. Read more about this in our step-by-step guide on how to set up SPF, DKIM, and DMARC from Microsoft Defender.
    If you’re using a third-party mail provider that doesn’t enforce DMARC evaluation on incoming emails, which is rare, you’ll need to contact their support to request a configuration change.
    9. Data Loss Prevention Aligned with NZISM
    The New Zealand Information Security Manual (NZISM) is the New Zealand Government’s manual on information assurance and information systems security. It includes guidance on data loss prevention (DLP), which must be followed to be aligned with the SGE Framework.
    Need Help Setting up SPF and DKIM for your Email Provider?
    Setting up SPF and DKIM for different ESPs often requires specific configurations. Some providers require you to publish SPF and DKIM on a subdomain, while others only require DKIM, or have different formatting rules. We’ve simplified all these steps to help you avoid misconfigurations that could delay your DMARC enforcement, or worse, block legitimate emails from reaching your recipients.
    Below you’ll find comprehensive setup guides for Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid. You can also explore our full blog section that covers setup instructions for many other well-known ESPs.
    Remember, all this information is reflected in your DMARC aggregate reports. These reports give you live visibility into your outgoing email ecosystem, helping you analyze and fix any issues specific to a given provider.
    Here are our step-by-step guides for the most common platforms:

    Google Workspace

    Microsoft 365

    These guides will help ensure your DNS records are configured correctly as part of the Secure Government Email (SGE) Framework rollout.
    Meet New Government Email Security Standards With EasyDMARC
    New Zealand’s SGE Framework sets a clear path for government agencies to enhance their email security by October 2025. With EasyDMARC, you can meet these technical requirements efficiently and with confidence. From protocol setup to continuous monitoring and compliance tracking, EasyDMARC streamlines the entire process, ensuring strong protection against spoofing, phishing, and data loss while simplifying your transition from SEEMail.
  • IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029


    By John P. Mello Jr.
    June 11, 2025 5:00 AM PT

    IBM unveiled its plan to build IBM Quantum Starling, shown in this rendering. Starling is expected to be the first large-scale, fault-tolerant quantum system. (Image Credit: IBM)

    IBM revealed Tuesday its roadmap for bringing a large-scale, fault-tolerant quantum computer, IBM Quantum Starling, online by 2029, which is significantly earlier than many technologists thought possible.
    The company predicts that when its new Starling computer is up and running, it will be capable of performing 20,000 times more operations than today’s quantum computers — a computational state so vast it would require the memory of more than a quindecillion (10⁴⁸) of the world’s most powerful supercomputers to represent.
    “IBM is charting the next frontier in quantum computing,” Big Blue CEO Arvind Krishna said in a statement. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.”
    IBM’s plan to deliver a fault-tolerant quantum system by 2029 is ambitious but not implausible, especially given the rapid pace of its quantum roadmap and past milestones, observed Ensar Seker, CISO at SOCRadar, a threat intelligence company in Newark, Del.
    “They’ve consistently met or exceeded their qubit scaling goals, and their emphasis on modularity and error correction indicates they’re tackling the right challenges,” he told TechNewsWorld. “However, moving from thousands to millions of physical qubits with sufficient fidelity remains a steep climb.”
    A qubit is the fundamental unit of information in quantum computing, capable of representing a zero, a one, or both simultaneously due to quantum superposition. In practice, fault-tolerant quantum computers use clusters of physical qubits working together to form a logical qubit — a more stable unit designed to store quantum information and correct errors in real time.
    Realistic Roadmap
    Luke Yang, an equity analyst with Morningstar Research Services in Chicago, believes IBM’s roadmap is realistic. “The exact scale and error correction performance might still change between now and 2029, but overall, the goal is reasonable,” he told TechNewsWorld.
    “Given its reliability and professionalism, IBM’s bold claim should be taken seriously,” said Enrique Solano, co-CEO and co-founder of Kipu Quantum, a quantum algorithm company with offices in Berlin and Karlsruhe, Germany.
    “Of course, it may also fail, especially when considering the unpredictability of hardware complexities involved,” he told TechNewsWorld, “but companies like IBM exist for such challenges, and we should all be positively impressed by its current achievements and promised technological roadmap.”
    Tim Hollebeek, vice president of industry standards at DigiCert, a global digital security company, added: “IBM is a leader in this area, and not normally a company that hypes their news. This is a fast-moving industry, and success is certainly possible.”
    “IBM is attempting to do something that no one has ever done before and will almost certainly run into challenges,” he told TechNewsWorld, “but at this point, it is largely an engineering scaling exercise, not a research project.”
    “IBM has demonstrated consistent progress, has committed $30 billion over five years to quantum computing, and the timeline is within the realm of technical feasibility,” noted John Young, COO of Quantum eMotion, a developer of quantum random number generator technology, in Saint-Laurent, Quebec, Canada.
    “That said,” he told TechNewsWorld, “fault-tolerant in a practical, industrial sense is a very high bar.”
    Solving the Quantum Error Correction Puzzle
    To make a quantum computer fault-tolerant, errors need to be corrected so large workloads can be run without faults. In a quantum computer, errors are reduced by clustering physical qubits to form logical qubits, which have lower error rates than the underlying physical qubits.
    “Error correction is a challenge,” Young said. “Logical qubits require thousands of physical qubits to function reliably. That’s a massive scaling issue.”
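
    A toy calculation shows why clustering lowers the error rate. This is the textbook three-qubit repetition code, not IBM’s actual scheme (Starling relies on far more sophisticated codes): if each physical qubit fails independently with probability p, a majority vote over three qubits fails only when at least two fail.

        # Toy model: three-qubit repetition code with majority vote.
        # Assumes independent errors with probability p per physical qubit.
        def logical_error_rate(p):
            # The vote fails when exactly 2 or all 3 qubits flip.
            return 3 * p**2 * (1 - p) + p**3

        for p in (0.1, 0.01, 0.001):
            print(f"physical p = {p}: logical p ~ {logical_error_rate(p):.1e}")

    At p = 0.01 the logical rate is roughly 3 in 10,000, illustrating the trade: many physical qubits buy one more reliable logical qubit.
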
    IBM explained in its announcement that creating increasing numbers of logical qubits capable of executing quantum circuits with as few physical qubits as possible is critical to quantum computing at scale. Until today, a clear path to building such a fault-tolerant system without unrealistic engineering overhead has not been published.

    Alternative and previous gold-standard error-correcting codes present fundamental engineering challenges, IBM continued. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations — necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices.
    In two research papers released with its roadmap, IBM detailed how it will overcome the challenges of building the large-scale, fault-tolerant architecture needed for a quantum computer.
    One paper outlines the use of quantum low-density parity check (qLDPC) codes to reduce physical qubit overhead. The other describes methods for decoding errors in real time using conventional computing.
    According to IBM, a practical fault-tolerant quantum architecture must:

    Suppress enough errors for useful algorithms to succeed
    Prepare and measure logical qubits during computation
    Apply universal instructions to logical qubits
    Decode measurements from logical qubits in real time and guide subsequent operations
    Scale modularly across hundreds or thousands of logical qubits
    Be efficient enough to run meaningful algorithms using realistic energy and infrastructure resources

    Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges. “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained.
    “Only certain computing workloads, such as random circuit sampling (RCS), can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. “However, workloads like RCS are not very commercially useful, and we believe commercial relevance is one of the key factors that determine the total market size for quantum computers.”
    Q-Day Approaching Faster Than Expected
    For years now, organizations have been told they need to prepare for “Q-Day” — the day a quantum computer will be able to crack all the encryption they use to keep their data secure. This IBM announcement suggests the window for action to protect data may be closing faster than many anticipated.
    “This absolutely adds urgency and credibility to the security expert guidance on post-quantum encryption being factored into their planning now,” said Dave Krauthamer, field CTO of QuSecure, maker of quantum-safe security solutions, in San Mateo, Calif.
    “IBM’s move to create a large-scale fault-tolerant quantum computer by 2029 is indicative of the timeline collapsing,” he told TechNewsWorld. “A fault-tolerant quantum computer of this magnitude could be well on the path to crack asymmetric ciphers sooner than anyone thinks.”

    “Security leaders need to take everything connected to post-quantum encryption as a serious measure and work it into their security plans now — not later,” he said.
    Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., pointed out that IBM is just the latest in a surge of quantum companies announcing quickly forthcoming computational breakthroughs within a few years.
    “It leads to the question of whether the U.S. government’s original PQC (post-quantum cryptography) preparation date of 2030 is still a safe date,” he told TechNewsWorld.
    “It’s starting to feel a lot more risky for any company to wait until 2030 to be prepared against quantum attacks. It also flies in the face of the latest cybersecurity EO (executive order) that relaxed PQC preparation rules as compared to Biden’s last EO PQC standard order, which told U.S. agencies to transition to PQC ASAP.”
    “Most US companies are doing zero to prepare for Q-Day attacks,” he declared. “The latest executive order seems to tell U.S. agencies — and indirectly, all U.S. businesses — that they have more time to prepare. It’s going to cause even more agencies and businesses to be less prepared during a time when it seems multiple quantum computing companies are making significant progress.”
    “It definitely feels that something is going to give soon,” he said, “and if I were a betting man, and I am, I would bet that most U.S. companies are going to be unprepared for Q-Day on the day Q-Day becomes a reality.”

    John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.

    Leave a Comment

    Click here to cancel reply.
    Please sign in to post or reply to a comment. New users create a free account.

    Related Stories

    More by John P. Mello Jr.

    view all

    More in Emerging Tech
    #ibm #plans #largescale #faulttolerant #quantum
    IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029
    IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029 By John P. Mello Jr. June 11, 2025 5:00 AM PT IBM unveiled its plan to build IBM Quantum Starling, shown in this rendering. Starling is expected to be the first large-scale, fault-tolerant quantum system.ADVERTISEMENT Enterprise IT Lead Generation Services Fuel Your Pipeline. Close More Deals. Our full-service marketing programs deliver sales-ready leads. 100% Satisfaction Guarantee! Learn more. IBM revealed Tuesday its roadmap for bringing a large-scale, fault-tolerant quantum computer, IBM Quantum Starling, online by 2029, which is significantly earlier than many technologists thought possible. The company predicts that when its new Starling computer is up and running, it will be capable of performing 20,000 times more operations than today’s quantum computers — a computational state so vast it would require the memory of more than a quindecillionof the world’s most powerful supercomputers to represent. “IBM is charting the next frontier in quantum computing,” Big Blue CEO Arvind Krishna said in a statement. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.” IBM’s plan to deliver a fault-tolerant quantum system by 2029 is ambitious but not implausible, especially given the rapid pace of its quantum roadmap and past milestones, observed Ensar Seker, CISO at SOCRadar, a threat intelligence company in Newark, Del. “They’ve consistently met or exceeded their qubit scaling goals, and their emphasis on modularity and error correction indicates they’re tackling the right challenges,” he told TechNewsWorld. “However, moving from thousands to millions of physical qubits with sufficient fidelity remains a steep climb.” A qubit is the fundamental unit of information in quantum computing, capable of representing a zero, a one, or both simultaneously due to quantum superposition. In practice, fault-tolerant quantum computers use clusters of physical qubits working together to form a logical qubit — a more stable unit designed to store quantum information and correct errors in real time. Realistic Roadmap Luke Yang, an equity analyst with Morningstar Research Services in Chicago, believes IBM’s roadmap is realistic. “The exact scale and error correction performance might still change between now and 2029, but overall, the goal is reasonable,” he told TechNewsWorld. “Given its reliability and professionalism, IBM’s bold claim should be taken seriously,” said Enrique Solano, co-CEO and co-founder of Kipu Quantum, a quantum algorithm company with offices in Berlin and Karlsruhe, Germany. “Of course, it may also fail, especially when considering the unpredictability of hardware complexities involved,” he told TechNewsWorld, “but companies like IBM exist for such challenges, and we should all be positively impressed by its current achievements and promised technological roadmap.” Tim Hollebeek, vice president of industry standards at DigiCert, a global digital security company, added: “IBM is a leader in this area, and not normally a company that hypes their news. 
This is a fast-moving industry, and success is certainly possible.” “IBM is attempting to do something that no one has ever done before and will almost certainly run into challenges,” he told TechNewsWorld, “but at this point, it is largely an engineering scaling exercise, not a research project.” “IBM has demonstrated consistent progress, has committed billion over five years to quantum computing, and the timeline is within the realm of technical feasibility,” noted John Young, COO of Quantum eMotion, a developer of quantum random number generator technology, in Saint-Laurent, Quebec, Canada. “That said,” he told TechNewsWorld, “fault-tolerant in a practical, industrial sense is a very high bar.” Solving the Quantum Error Correction Puzzle To make a quantum computer fault-tolerant, errors need to be corrected so large workloads can be run without faults. In a quantum computer, errors are reduced by clustering physical qubits to form logical qubits, which have lower error rates than the underlying physical qubits. “Error correction is a challenge,” Young said. “Logical qubits require thousands of physical qubits to function reliably. That’s a massive scaling issue.” IBM explained in its announcement that creating increasing numbers of logical qubits capable of executing quantum circuits with as few physical qubits as possible is critical to quantum computing at scale. Until today, a clear path to building such a fault-tolerant system without unrealistic engineering overhead has not been published. Alternative and previous gold-standard, error-correcting codes present fundamental engineering challenges, IBM continued. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations — necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices. In two research papers released with its roadmap, IBM detailed how it will overcome the challenges of building the large-scale, fault-tolerant architecture needed for a quantum computer. One paper outlines the use of quantum low-density parity checkcodes to reduce physical qubit overhead. The other describes methods for decoding errors in real time using conventional computing. According to IBM, a practical fault-tolerant quantum architecture must: Suppress enough errors for useful algorithms to succeed Prepare and measure logical qubits during computation Apply universal instructions to logical qubits Decode measurements from logical qubits in real time and guide subsequent operations Scale modularly across hundreds or thousands of logical qubits Be efficient enough to run meaningful algorithms using realistic energy and infrastructure resources Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges. “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained. “Only certain computing workloads, such as random circuit sampling, can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. 
“However, workloads like RCS are not very commercially useful, and we believe commercial relevance is one of the key factors that determine the total market size for quantum computers.” Q-Day Approaching Faster Than Expected For years now, organizations have been told they need to prepare for “Q-Day” — the day a quantum computer will be able to crack all the encryption they use to keep their data secure. This IBM announcement suggests the window for action to protect data may be closing faster than many anticipated. “This absolutely adds urgency and credibility to the security expert guidance on post-quantum encryption being factored into their planning now,” said Dave Krauthamer, field CTO of QuSecure, maker of quantum-safe security solutions, in San Mateo, Calif. “IBM’s move to create a large-scale fault-tolerant quantum computer by 2029 is indicative of the timeline collapsing,” he told TechNewsWorld. “A fault-tolerant quantum computer of this magnitude could be well on the path to crack asymmetric ciphers sooner than anyone thinks.” “Security leaders need to take everything connected to post-quantum encryption as a serious measure and work it into their security plans now — not later,” he said. Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., pointed out that IBM is just the latest in a surge of quantum companies announcing quickly forthcoming computational breakthroughs within a few years. “It leads to the question of whether the U.S. government’s original PQCpreparation date of 2030 is still a safe date,” he told TechNewsWorld. “It’s starting to feel a lot more risky for any company to wait until 2030 to be prepared against quantum attacks. It also flies in the face of the latest cybersecurity EOthat relaxed PQC preparation rules as compared to Biden’s last EO PQC standard order, which told U.S. agencies to transition to PQC ASAP.” “Most US companies are doing zero to prepare for Q-Day attacks,” he declared. “The latest executive order seems to tell U.S. agencies — and indirectly, all U.S. businesses — that they have more time to prepare. It’s going to cause even more agencies and businesses to be less prepared during a time when it seems multiple quantum computing companies are making significant progress.” “It definitely feels that something is going to give soon,” he said, “and if I were a betting man, and I am, I would bet that most U.S. companies are going to be unprepared for Q-Day on the day Q-Day becomes a reality.” John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John. Leave a Comment Click here to cancel reply. Please sign in to post or reply to a comment. New users create a free account. Related Stories More by John P. Mello Jr. view all More in Emerging Tech #ibm #plans #largescale #faulttolerant #quantum
  • Harnessing Silhouette for Dramatic Storytelling

    Silhouette photography has a unique power to convey strong emotion and dramatic narrative by emphasizing shape and form over detail and texture. By harnessing silhouettes creatively, photographers can captivate viewers’ imaginations, prompting them to fill in the unseen details and engage more deeply with the story. Here’s how to master silhouettes and elevate your photographic storytelling.

    The Art of Silhouettes
    A silhouette is created when your subject is backlit, making the subject appear completely dark against a lighter background. Silhouettes rely heavily on strong outlines, instantly recognizable shapes, and clear gestures to tell compelling stories.
    Capturing the Perfect Silhouette
    Ideal Conditions

    Sunrise and sunset offer low-angle sunlight, providing ideal lighting conditions to create dramatic silhouettes.
    Artificial light sources like urban lights, windows, and doorways offer unique creative opportunities for silhouette photography.

    Camera Settings

    Adjust your exposure for the brightest part of the scene, usually the background, to render your subject as a dark silhouette.
    Choose a narrower aperture (higher f-number) to maintain clear, sharp outlines, ensuring your silhouette remains distinct.

    Composition Techniques for Dramatic Impact

    Select subjects with strong, recognizable shapes—human figures, animals, architecture, and trees often create compelling silhouettes.
    Encourage subjects to use clear gestures or dynamic poses to communicate emotion or action effectively.
    Utilize negative space to emphasize silhouettes, creating visual balance and directing viewers’ attention to the subject.

    Enhancing Storytelling through Silhouettes
    Silhouettes simplify your scene, focusing viewers’ attention entirely on the emotional or narrative essence of your image. Obscuring details introduces an element of mystery, inviting viewers to engage actively with your photograph. Silhouettes naturally evoke emotional responses, effectively conveying solitude, contemplation, love, or drama.
    Post-Processing Tips

    Enhance contrast and deepen blacks to emphasize your silhouette, strengthening its dramatic presence.
    Apply subtle color grading or tone adjustments to amplify mood—warmer tones evoke romance or nostalgia, while cooler tones suggest tranquility or melancholy.
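    For readers who prefer to script these two adjustments, here is a rough sketch using the Pillow imaging library; the file name, black-point cutoff, and enhancement factors are placeholder values to experiment with, not prescriptions.

    from PIL import Image, ImageEnhance

    img = Image.open("silhouette.jpg").convert("RGB")  # placeholder file name

    # Deepen blacks: push everything below a cutoff to pure black.
    BLACK_POINT = 40  # assumed cutoff on the 0-255 scale
    img = img.point(lambda v: 0 if v < BLACK_POINT else v)

    # Raise overall contrast to strengthen the outline.
    img = ImageEnhance.Contrast(img).enhance(1.4)  # 1.0 leaves it unchanged

    # Warm the tones slightly for a nostalgic mood by lifting the red channel.
    r, g, b = img.split()
    r = r.point(lambda v: min(255, int(v * 1.06)))
    img = Image.merge("RGB", (r, g, b))

    img.save("silhouette_graded.jpg")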

    Creative Applications
    Silhouettes are versatile across many genres, including intimate portraits, dynamic street photography, and striking nature and wildlife imagery. Using silhouettes thoughtfully allows photographers to communicate powerful forms and actions, creating graphically strong and emotionally resonant photographs.
    Silhouettes offer photographers an exceptional tool for impactful storytelling, combining simplicity with emotional intensity. By mastering essential techniques, thoughtful composition, and creative execution, you can craft compelling visual narratives that resonate deeply with your audience. Explore silhouettes in your photography, and uncover the profound storytelling power hidden within shadows.
    Extended reading: Creating depth and drama with moody photography
    The post Harnessing Silhouette for Dramatic Storytelling appeared first on 500px.
  • ByteDance Researchers Introduce DetailFlow: A 1D Coarse-to-Fine Autoregressive Framework for Faster, Token-Efficient Image Generation

    Autoregressive image generation has been shaped by advances in sequential modeling, originally seen in natural language processing. This field focuses on generating images one token at a time, similar to how sentences are constructed in language models. The appeal of this approach lies in its ability to maintain structural coherence across the image while allowing for high levels of control during the generation process. As researchers began to apply these techniques to visual data, they found that structured prediction not only preserved spatial integrity but also supported tasks like image manipulation and multimodal translation effectively.
    Despite these benefits, generating high-resolution images remains computationally expensive and slow. A primary issue is the number of tokens needed to represent complex visuals. Raster-scan methods that flatten 2D images into linear sequences require thousands of tokens for detailed images, resulting in long inference times and high memory consumption. Models like Infinity need over 10,000 tokens for a 1024×1024 image. This becomes unsustainable for real-time applications or when scaling to more extensive datasets. Reducing the token burden while preserving or improving output quality has become a pressing challenge.
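    To make that token burden concrete, a quick back-of-envelope calculation helps; the patch sizes below are assumptions for illustration, since the article only states that models like Infinity need over 10,000 tokens at 1024×1024.

    # Raster-scan tokenizers emit one token per patch, so the count grows
    # quadratically with the image side length (patch sizes assumed here).
    for side, patch in [(256, 16), (1024, 16), (1024, 8)]:
        tokens = (side // patch) ** 2
        print(f"{side}x{side} with {patch}x{patch} patches -> {tokens} tokens")
    # 256x256 -> 256 tokens; 1024x1024 -> 4,096; finer 8x8 patches -> 16,384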

    Efforts to mitigate token inflation have led to innovations like next-scale prediction seen in VAR and FlexVAR. These models create images by predicting progressively finer scales, which imitates the human tendency to sketch rough outlines before adding detail. However, they still rely on hundreds of tokens—680 in the case of VAR and FlexVAR for 256×256 images. Moreover, approaches like TiTok and FlexTok use 1D tokenization to compress spatial redundancy, but they often fail to scale efficiently. For example, FlexTok’s gFID increases from 1.9 at 32 tokens to 2.5 at 256 tokens, highlighting a degradation in output quality as the token count grows.
    Researchers from ByteDance introduced DetailFlow, a 1D autoregressive image generation framework. This method arranges token sequences from global to fine detail using a process called next-detail prediction. Unlike traditional 2D raster-scan or scale-based techniques, DetailFlow employs a 1D tokenizer trained on progressively degraded images. This design allows the model to prioritize foundational image structures before refining visual details. By mapping tokens directly to resolution levels, DetailFlow significantly reduces token requirements, enabling images to be generated in a semantically ordered, coarse-to-fine manner.

    The mechanism in DetailFlow centers on a 1D latent space where each token contributes incrementally more detail. Earlier tokens encode global features, while later tokens refine specific visual aspects. To train this, the researchers created a resolution mapping function that links token count to target resolution. During training, the model is exposed to images of varying quality levels and learns to predict progressively higher-resolution outputs as more tokens are introduced. It also implements parallel token prediction by grouping sequences and predicting entire sets at once. Since parallel prediction can introduce sampling errors, a self-correction mechanism was integrated. This system perturbs certain tokens during training and teaches subsequent tokens to compensate, ensuring that final images maintain structural and visual integrity.
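    The paper is the authoritative reference; as a loose schematic of the loop just described, with an invented model interface and an assumed square-root mapping from token count to resolution, decoding might look like this:

    import math

    def tokens_to_resolution(n_tokens, base_res=64, base_tokens=16):
        # Assumed mapping: output resolution grows with the square root of
        # the token count, so the earliest tokens carry coarse structure.
        return int(base_res * math.sqrt(n_tokens / base_tokens))

    def generate(model, total_tokens=128, group_size=8):
        tokens, image = [], None
        while len(tokens) < total_tokens:
            # Predict a whole group of next-detail tokens in one parallel
            # step, conditioned on everything emitted so far.
            group = model.predict_group(tokens, group_size)  # hypothetical API
            tokens.extend(group)
            # Every prefix decodes to an image; longer prefixes map to
            # higher resolutions with finer detail.
            image = model.decode(tokens, tokens_to_resolution(len(tokens)))  # hypothetical API
        return image

    In this sketch, the self-correction behavior described above would live inside the group predictor: during training, earlier groups are deliberately perturbed so that later groups learn to compensate.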
    The results from the experiments on the ImageNet 256×256 benchmark were noteworthy. DetailFlow achieved a gFID score of 2.96 using only 128 tokens, outperforming VAR at 3.3 and FlexVAR at 3.05, both of which used 680 tokens. Even more impressive, DetailFlow-64 reached a gFID of 2.62 using 512 tokens. In terms of speed, it delivered nearly double the inference rate of VAR and FlexVAR. A further ablation study confirmed that the self-correction training and semantic ordering of tokens substantially improved output quality. For example, enabling self-correction dropped the gFID from 4.11 to 3.68 in one setting. These metrics demonstrate both higher quality and faster generation compared to established models.

    By focusing on semantic structure and reducing redundancy, DetailFlow presents a viable solution to long-standing issues in autoregressive image generation. The method’s coarse-to-fine approach, efficient parallel decoding, and ability to self-correct highlight how architectural innovations can address performance and scalability limitations. Through their structured use of 1D tokens, the researchers from ByteDance have demonstrated a model that maintains high image fidelity while significantly reducing computational load, making it a valuable addition to image synthesis research.

    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 95k+ ML SubReddit and Subscribe to our Newsletter.
    Nikhil is an intern consultant at Marktechpost. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur. Nikhil is an AI/ML enthusiast who is always researching applications in fields like biomaterials and biomedical science. With a strong background in Material Science, he is exploring new advancements and creating opportunities to contribute.