• Decades ago, concrete overtook steel as the predominant structural material for towers worldwide—the Skyscraper Museum’s new exhibition examines why and how

    “Is that concrete all around, or is it in my head?” asked Ian Hunter in “All the Young Dudes,” the song David Bowie wrote for Mott the Hoople in 1972. Concrete is all around us, and we haven’t quite wrapped our heads around it. It’s one of the indispensable materials of modernity; as we try to decarbonize the built environment, it’s part of the problem, and innovations in its composition may become part of the solution. Understanding its history more clearly, the Skyscraper Museum’s new exhibition in Manhattan implies, just might help us employ it better.

    Concrete is “the second most used substance in the world, after water,” the museum’s founder/director/curator Carol Willis told AN during a recent visit. For plasticity, versatility, and compressive strength, reinforced concrete is hard to beat, though its performance is more problematic when assessed by the metric of embodied and operational carbon, a consideration the exhibition acknowledges up front. In tall construction, concrete has become nearly hegemonic, yet its central role, contend Willis and co-curator Thomas Leslie, formerly of Foster + Partners and now a professor at the University of Illinois, Urbana-Champaign, is underrecognized by the public and by mainstream architectural history. The current exhibition aims to change that perception.
    The Skyscraper Museum in Lower Manhattan features an exhibition, The Modern Concrete Skyscraper, which examines the history of material choices in building tall towers. (Courtesy the Skyscraper Museum)

    The Modern Concrete Skyscraper examines the history of tall towers’ structural material choices, describing a transition from the early dominance of steel frames to the contemporary condition, in which most large buildings rely on concrete. This change did not happen instantly or for any single reason but through a combination of technical and economic factors, including innovations by various specialists, well-recognized and otherwise; the availability of high-quality limestone deposits near Chicago; and the differential development of materials industries in nations whose architecture grew prominent in recent decades. As supertalls reach ever higher—in the global race for official height rankings by the Council on Tall Buildings and Urban Habitat (CTBUH) and national, corporate, or professional bragging rights—concrete’s dominance may not be permanent in that sector, given the challenge of pumping the material beyond a certain height. (The 2,717-foot Burj Khalifa, formerly Burj Dubai, uses concrete up to 1,987 feet and steel above that point; Willis quotes SOM’s William Baker describing it as “the tallest steel building with a concrete foundation of 156 stories.”) For the moment, however, concrete is ahead of its chief competitors, steel and (on a smaller scale) timber. Regardless of possible promotional inferences, Willis said, “we did not work with the industry in any way for this exhibition.”

    “The invention of steel and the grid of steel and the skeleton frame is only the first chapter of the history of the skyscraper,” Willis explained. “The second chapter, and the one that we’re in now, is concrete. Surprisingly, no one had ever told that story of the skyscraper today with a continuous narrative.” The exhibition traces the use of concrete back to the ancient Roman combination of aggregate and pozzolana—the chemical formula for which was “largely lost with the fall of the Roman Empire,” though some Byzantine and medieval structures approximated it. From there, the show explores comparable materials’ revival in 18th-century England, the patenting of Portland cement by Leeds builder Joseph Aspdin in 1824, the proof-of-concept concrete house by François Coignet in 1856, and the pivotal development of rebar in the mid-19th century, with overdue attention to Ernest Ransome’s 1903 Ingalls Building in Cincinnati, then the world’s tallest concrete building at 15 stories and arguably the first concrete skyscraper.
    The exhibition includes a timeline that traces concrete’s history from its origins in Rome to its contemporary use in skyscraper construction. (Courtesy the Skyscraper Museum)

    Baker’s lectures, Willis reported, sometimes pose a deceptively simple question: “‘What is a skyscraper?’ In 1974, when the World Trade Center and Sears Tower are just finished, you would say it’s a very tall building that is built of steel, an office building in North America. But if you ask that same question today, the answer is: It’s a building that is mixed-use, constructed of concrete, and [located] in Asia or the Middle East.” The exhibition organizes the history of concrete towers by eras of engineering innovation, devoting special attention to the 19th- and early-20th-century “patent era” of Claude Allen Porter Turner (pioneer in flat-slab flooring and mushroom columns) and Henry Chandlee Turner (founder of Turner Construction), Ransome (who patented twisted-iron rebar), and François Hennebique (known for the reinforced concrete system exemplified by Liverpool’s Royal Liver Building, the world’s tallest concrete office building when completed in 1911). In the postwar era, “concrete comes out onto the surface [as] both a structural material and aesthetic.” Brutalism, perhaps to some observers’ surprise, “does not figure very large in high-rise design,” Willis said, except for Paul Rudolph’s Tracey Towers in the Bronx. The exhibition, however, devotes considerable attention to the work of Pier Luigi Nervi, Bertrand Goldberg (particularly Marina City), and SOM’s Fazlur Khan, pioneer of the structural tube system in the 1960s and 1970s—followed by the postmodernist 1980s, when concrete could express either engineering values or ornamentation.
    The exhibition highlights a number of concrete towers, including Paul Rudolph’s Tracey Towers in the Bronx. (Courtesy the Skyscraper Museum)

    “In the ’90s, there were material advances in engineering analysis and computerization that helped to predict performance, and so buildings can get taller and taller,” Willis said. The current era, if one looks to CTBUH rankings, is dominated by the supertalls seen in Dubai, Shanghai, and Kuala Lumpur, after the Petronas Towers (1998) “took the title of world’s tallest building from North America for the first time and traumatized everybody about that.” The previous record holder, Chicago’s Sears (now Willis) Tower, comprised steel structural tubes on concrete caissons; with Petronas, headquarters of Malaysia’s national petroleum company of that name, a strong concrete industry was represented but a strong national steel industry was lacking, and as Willis frequently says, form follows finances. In any event, by the ’90s concrete was already becoming the standard material for supertalls, particularly on soft-soiled sites like Shanghai, where its water resistance and compressive strength are well suited to foundation construction. Its plasticity is also well suited to complex forms like the triangular Burj, Kuala Lumpur’s Merdeka 118, and (if eventually completed) the even taller Jeddah Tower, designed to “confuse the wind,” shed vortices, and manage wind forces. Posing the same question Louis Kahn asked about the intentions of a brick, Willis said, with concrete “the answer is: anything you want.”

    The exhibition is front-loaded with scholarly material, presenting eight succinct yet informative wall texts on the timeline of concrete construction. The explanatory material is accompanied by ample photographs as well as structural models on loan from SOM, Pelli Clarke & Partners, and other firms. Some materials are repurposed from the museum’s previous shows, particularly Supertall! (2011–12) and Sky High and the Logic of Luxury (2013–14). The models allow close examination of the Burj Khalifa, Petronas Towers, Jin Mao Tower, Merdeka 118, and others, including two unbuilt Chicago projects that would have exceeded 2,000 feet: the Miglin-Beitler Skyneedle (Cesar Pelli/Thornton Tomasetti) and 7 South Dearborn (SOM). The Burj, Willis noted, was all structure and no facade for a time: When its curtain-wall manufacturer, Schmidlin, went bankrupt in 2006, it “ended up going to 100 stories without having a stitch of glass on it,” temporarily becoming a “1:1 scale model of the structural system up to 100 stories.” Its prominence justifies its appearance here in two models, including one from RWDI’s wind-tunnel studies.
    Eero Saarinen’s only skyscraper, built for CBS in 1965 and also known as “Black Rock,” under construction in New York City. (Courtesy Eero Saarinen Collection, Manuscripts, and Archives, Yale University Library)

    The exhibition opened in March, with plans to stay up at least through October (Willis prefers to keep the date flexible), with accompanying lectures and panels to be announced on the museum’s website (skyscraper.org). Though the exhibition’s full textual and graphic content is available online, the physical models alone are worth a trip to the Battery Park City headquarters.
    Intriguing questions arise from the exhibition without easy answers, setting the table for lively discussion and debate. One is whether the patenting of innovations like the Ransome bar and the Système Hennebique incentivized technological progress or hindered useful technology transfer. Willis speculated, “Did the fact that there were inventions and patents mean that competition was discouraged, that the competition was only in the realm of business, rather than advancing the material?” A critical question is whether research into the chemistry of concrete, including MIT’s 2023 report on the self-healing properties of Roman pozzolana and proliferating claims about “green concrete” using alternatives to Portland cement, can lead to new types of the material with improved durability and lower emissions footprints. This exhibition provides a firm foundation in concrete’s fascinating history, opening space for informed speculation about its future.
    Bill Millard is a regular contributor to AN.
  • FromSoft acknowledges issues with Elden Ring Nightreign matchmaking

    UPDATE 6.07pm: FromSoft has issued a follow-up suggestion for players still struggling with matchmaking in Elden Ring Nightreign.
    In a post on X/Twitter, the developer said: "If you have difficulty matchmaking on PS4 & PS5, please check your NAT type. NAT type 3 may affect matchmaking on PSN.
    "Check your NAT type with the following steps: Home > Settings > Network > Connection Status > Check Connection Status.
    "Thank you for your support."
    Original story follows.

    ORIGINAL STORY: If you're jumping into Elden Ring Nightreign this weekend and are struggling to find a Player 2 - and/or a Player 3 - you're not alone.
    In a brief statement posted to official Elden Ring social media accounts about an hour ago (Saturday, 31st May), developer FromSoftware is recommending players "restart the matchmaking process" if they're struggling to find a co-op partner.


    Elden Ring Nightreign For Dummies: Basics For EVERYTHING You Need to Know (But Were Afraid to Ask). Watch on YouTube
    "Nightfarers. If you encounter issues finding other players when launching an expedition in


    It may not quite fix the issue for all players, however; as one commenter asked in the replies: "How often am I supposed to restart it? Yesterday I spend [sic] several hours restarting it on PS5 to not even play one match…"
    Finding a co-op partner is pretty important for Nightreign players. As Ed wrote yesterday, he wouldn't recommend Nightreign as a solo game at this point, as it's clearly not the intended way to play. However, it seems FromSoftware is aiming to respond to the current backlash and make solo play more feasible: the studio previously stated it was considering a two-player option post-launch, acknowledging it was "something that was overlooked during development".
    "FromSoftware's multiplayer spin-off is an exhilarating rush and a celebration of the studio's prior achievements Souls veterans will devour," Ed wrote in our Elden Ring Nightreign review.
  • CWA negotiates new contract for ZeniMax including "substantial" wage increases and a credits policy for QA staff

    "This agreement shows what's possible when workers stand together and refuse the status quo."

    News by Vikki Blake, Contributor. Published on May 31, 2025. Image credit: Microsoft.

    The Communications Workers of America (CWA) says it has reached a "historic tentative contract agreement" with ZeniMax Media staff at Microsoft.
    In a statement, the union calls the deal a "first for the video game industry", and revealed it had been negotiating for a first contract for "nearly two years".
    "QA workers from across the country continue to lead the charge for industry-wide change," said Page Branson, Senior II QA Tester and ZeniMax Workers United-CWA bargaining committee member. "Going toe-to-toe with one of the largest corporations in the world isn’t a small feat. This is a monumental victory for all current video game workers and for those that come after."

    Xbox currently has more first-party games coming to PlayStation 5 this year than Sony. Watch on YouTube
    The new contract is said to set "new standards for the industry" and includes "substantial across-the-board wage increases as well as new minimum salaries for workers". It also includes protections against arbitrary dismissal, grievance procedures, and a crediting policy that "clearly acknowledges the QA workers' contributions to the video games they help create", as well as a previously announced agreement on how artificial intelligence is introduced and implemented in the workplace.
    "Workers in the video game industry are demonstrating once again that collective power works. This agreement shows what's possible when workers stand together and refuse to accept the status quo," added CWA President Claude Cummings Jr. "Whether it's having a say about the use of AI in the workplace, fighting for significant wage increases and fair crediting policies, or protecting workers from retaliation, our members have raised the bar. We're proud to support them every step of the way."
    BREAKING: We have reached a historic first tentative contract agreement with Microsoft!

    cwa-union.org/news/release... — CWA (@cwaunion.bsky.social), May 30, 2025 at 5:04 PM

    Members can expect contract explanation meetings over the next few weeks, and a ratification vote is expected by 20th June.
    As game development becomes increasingly insecure all over the world, more and more developers and performers are organising collective bargaining. Following news of the SAG-AFTRA strike last year, Equity stated it stood "in solidarity", but would not be authorising a strike. It did, however, recently call on the games industry to improve conditions for performers, and a protest took place outside the BAFTA Games Awards as Equity members held placards reading "Union contracts in gaming now".
    Last month, the US union warned of "alarming loopholes" for "AI abuse" in the latest proposal to end industrial action, while earlier this month, almost 200 Overwatch developers working at Activision Blizzard joined the Communications Workers of America (CWA) union after the "overwhelming majority" of workers signed up.
  • Marketing in an age of economic uncertainty

    Let’s get this out of the way: We constantly live in uncertain times. Periods of tranquility are actually an aberration, if not an illusion.

    The relationship between marketing budgets and economic volatility has always been complex. What we’re witnessing isn’t just the usual ebb and flow of consumer confidence or standard market corrections. It’s an unprecedented convergence of tariff confusion, inflationary pressures, supply chain disruptions, and debt refinancing challenges.

    As I talk to CMOs and marketing leaders across industries, one word keeps surfacing: paralysis.

    Decision makers find themselves frozen, unsure whether to commit to long-term advertising contracts, unable to accurately forecast costs, and struggling to craft messaging that resonates in a consumer landscape where spending power is increasingly unpredictable.

    The historical perspective: Who thrives in downturns?

    When I look back at previous economic contractions—particularly 2008 and 2020—a clear pattern emerges that separates survivors from thrivers.

    In 2008, as financial markets collapsed, brands like Amazon, Netflix, and Hyundai didn’t retreat. They advanced.

    Netflix invested heavily in its streaming service during the financial crisis, laying the groundwork for its eventual dominance. Hyundai introduced its ground-breaking “Assurance Program,” allowing customers to return newly purchased vehicles if they lost their jobs—a true masterstroke that increased Hyundai’s market share while competitors were seeing double-digit sales declines.

    The 2020 pandemic presented similar divergent paths. While many brands slashed marketing budgets in panic, companies like Zoom and DoorDash significantly increased their marketing investments, recognizing the unique moment to capture market share when consumers were rapidly forming new habits.

    The common thread? These companies didn’t view marketing as a discretionary expense to be cut during uncertainty. They saw it as a strategic lever, one that should be pulled harder during hard times.

    4 strategic approaches for the uncertainty-conscious marketer

    Here’s what the most forward-thinking marketers are doing now to navigate the choppy waters ahead:

    They’re embracing flexibility in all media contracts. The days of rigid, long-term commitments are giving way to more agile arrangements that allow for budget reallocation as economic conditions shift. This means negotiating pause clauses, shorter commitment windows, and performance-based terms that protect all contracted parties.

    Budgets are shifting toward measurable, adaptable channels. While social media and traditional media face the deepest anticipated cuts (41% and 43% respectively), digital advertising continues to gain market share despite economic concerns. Digital is projected to encompass up to 79% of total ad spend by 2030, up from its current 67%.

    Message content is being entirely rethought. In the face of economic anxiety, brands need messaging that acknowledges reality while providing genuine value. We’re seeing this play out in automotive advertising, where some manufacturers are emphasizing their American manufacturing credentials. Ford’s “From America, For America” campaign represents a strategic positioning that resonates in an era of tariff concerns. Like Hyundai in 2008, these advertisers are using the moment to emphasize their particular brand’s appeal.

    AI is being leveraged not just for cost cutting but for scenario planning. The most sophisticated marketing teams are using AI to model multiple economic outcomes and prepare messaging, budget allocations, and channel strategies for each scenario.

    The creative reset: How agencies have already adapted

    It’s worth noting that the industry isn’t starting from scratch in facing these challenges. Client behavior on creative development has undergone a dramatic transformation over the past several years. The best independent agencies have already restructured their operations in response.

    Gone are the days of lengthy creative development cycles and rigid campaign frameworks. Anticipating these changes years ago, independent shops have largely embraced agile methodologies that align perfectly with today’s economic realities.

    In many ways, the independent agency sector has already prepared for exactly this kind of destabilizing environment. They’ve built their businesses around speed and adaptability rather than scale and standardization. As such, they’re uniquely positioned to help steer brands through bumps ahead without sacrificing creative impact or market presence.

    Brand versus performance in uncertain times

    Perhaps the most critical strategic question facing marketers is how to balance brand building against performance marketing when budgets contract.

    Historical data consistently shows that brands maintaining or increasing their share of voice during downturns emerge in stronger positions when markets recover. Yet short-term revenue pressures make performance marketing irresistibly tempting when every dollar must be justified.

    The smart play here isn’t choosing one over the other but reimagining how the two work together. Performance marketing can be designed to build brand equity simultaneously. Brand marketing can incorporate more direct response elements. The artificial wall between these disciplines must come down if brands are to survive economic headwinds.

    Opportunity within adversity

    The brands that will emerge strongest from this period of uncertainty won’t be those with the largest budgets, but those with the clearest strategic vision, the most agile execution, and the courage to maintain presence when competitors retreat.

    Economic uncertainty doesn’t change the fundamental truth that share of voice leads to share of market. It simply raises the stakes and rewards those who can maintain their voice when others fall silent.

    Looking at the latter half of 2025, the marketing leaders who view this period not as a time to hide but as a rare opportunity to stand out will be the ones writing the success stories we’ll be studying for years to come.

    Tim Ringel is global CEO of Meet The People.
    #marketing #age #economic #uncertainty
    Marketing in an age of economic uncertainty
    Let’s get this out of the way: We constantly live in uncertain times. Periods of tranquility are actually an aberration, if not an illusion. The relationship between marketing budgets and economic volatility has always been complex. What we’re witnessing isn’t just the usual ebb and flow of consumer confidence or standard market corrections. It’s an unprecedented convergence of tariff confusion, inflationary pressures, supply chain disruptions, and debt refinancing challenges. As I talk to CMOs and marketing leaders across industries, one word keeps surfacing: paralysis. Decision makers find themselves frozen, unsure whether to commit to long-term advertising contracts, unable to accurately forecast costs, and struggling to craft messaging that resonates in a consumer landscape where spending power is increasingly unpredictable. The historical perspective: Who thrives in downturns? When I look back at previous economic contractions—particularly 2008 and 2020—a clear pattern emerges that separates survivors from thrivers. In 2008, as financial markets collapsed, brands like Amazon, Netflix, and Hyundai didn’t retreat. They advanced. Netflix invested heavily in its streaming service during the financial crisis, laying the groundwork for its eventual dominance. Hyundai introduced its ground-breaking “Assurance Program,” allowing customers to return newly purchased vehicles if they lost their jobs—a true masterstroke that increased Hyundai’s market share while competitors were seeing double-digit sales declines. The 2020 pandemic presented similar divergent paths. While many brands slashed marketing budgets in panic, companies like Zoom and DoorDash significantly increased their marketing investments, recognizing the unique moment to capture market share when consumers were rapidly forming new habits. The common thread? These companies didn’t view marketing as a discretionary expense to be cut during uncertainty. They saw it as a strategic lever, one that should be pulled harder during hard times. 4 strategic approaches for the uncertainty-conscious marketer Here’s what the most forward-thinking marketers are doing now to navigate the choppy waters ahead: They’re embracing flexibility in all media contracts. The days of rigid, long-term commitments are giving way to more agile arrangements that allow for budget reallocation as economic conditions shift. This means negotiating pause clauses, shorter commitment windows, and performance-based terms that protect all contracted parties. Budgets are shifting toward measurable, adaptable channels. While social media and traditional media face the deepest anticipated cuts, digital advertising continues to gain market share despite economic concerns. Digital is projected to encompass up to 79% of total ad spend by 2030, up from its current 67%. Message content is being entirely rethought. In the face of economic anxiety, brands need messaging that acknowledges reality while providing genuine value. We’re seeing this play out in automotive advertising, where some manufacturers are emphasizing their American manufacturing credentials. Ford’s “From America, For America” campaign represents a strategic positioning that resonates in an era of tariff concerns. As Hyundai, in 2008, these advertisers are using the moment to emphasize their particular brand’s appeal. AI is being leveraged not just for cost cutting but for scenario planning. 
The most sophisticated marketing teams are using AI to model multiple economic outcomes and prepare messaging, budget allocations, and channel strategies for each scenario. The creative reset: How agencies have already adapted It’s worth noting that the industry isn’t starting from scratch in facing these challenges. Client behavior on creative development has undergone a dramatic transformation over the past several years. The best independent agencies have already restructured their operations in response. Gone are the days of lengthy creative development cycles and rigid campaign frameworks. Anticipating these changes years ago, independent shops have largely embraced agile methodologies that align perfectly with today’s economic realities. In many ways, the independent agency sector has already prepared for exactly this kind of destabilizing environment. They’ve built their businesses around speed and adaptability rather than scale and standardization. As such, they’re uniquely positioned to help steer brands through bumps ahead without sacrificing creative impact or market presence. Brand versus performance in uncertain times Perhaps the most critical strategic question facing marketers is how to balance brand building against performance marketing when budgets contract. Historical data consistently shows that brands maintaining or increasing their share of voice during downturns emerge in stronger positions when markets recover. Yet short-term revenue pressures make performance marketing irresistibly tempting when every dollar must be justified. The smart play here isn’t choosing one over the other but reimagining how all of these factors work together. Performance marketing can be designed to build brand equity simultaneously. Brand marketing can incorporate more direct response elements. The artificial wall between these disciplines must come down to survive economic headwinds. Opportunity within adversity The brands that will emerge strongest from this period of uncertainty won’t be those with the largest budgets, but those with the clearest strategic vision, the most agile execution, and the courage to maintain presence when competitors retreat. Economic uncertainty doesn’t change the fundamental truth that share of voice leads to share of market. It simply raises the stakes and rewards those who can maintain their voice when others fall silent. Looking at the latter half of 2025, the marketing leaders who view this period not as a time to hide but as a rare opportunity to stand out will be the ones writing the success stories we’ll be studying for years to come. Tim Ringel is global CEO of Meet The People. #marketing #age #economic #uncertainty
  • Editorial: Gentle Density in Action

    Gerrard Healthy Housing replaces a single-family home in a walkable Toronto neighbourhood with 10 rental housing units. Photo by Alexandra Berceneau
    Gerrard Healthy Housing, at Gerrard and Main in Toronto, delivers exactly the kind of “gentle density” that has been much discussed and desired in the city. The eight-unit walk-up rental building with two laneway houses replaces a single-family home, while carefully integrating with its walkable neighbourhood.
    But achieving this outcome was no easy matter. To streamline approvals, TMU professor Cheryl Atkinson, of Atkinson Architect, aimed to design with no variances. “Everything’s to the minimum in terms of distance between the attached four-plexes and the laneway units,” says Rolf Paloheimo, of P&R Development, who also acted as project manager. “We built to the maximum height within 100 millimetres.”
Atkinson had designed a panellized, net-zero missing middle housing unit exhibited at DX’s EDIT festival as part of a TMU research project; Paloheimo was the client and developer behind the 1996 CMHC Riverdale Healthy House, a model sustainable development designed by Martin Liefhebber. For Gerrard Healthy Housing, they set out to create as close to Passive House as possible, specifying all-electric heat pumps and ERVs, using wood framing, and deploying blown-in cellulose insulation to achieve a quiet and airtight R45-R65 envelope—although stopping short of installing triple-glazed windows.
“We wanted to make it reproducible and affordable,” says Paloheimo. “Part of my argument for doing this scale of development is that if you stay in part 9 [of the building code], the construction is a lot lighter, the consultant load is lighter. You’re stuck with higher land costs, but costs are quite a bit lower to build,” he adds. The construction costs for the project tallied up to $300 per square foot, and the all-in cost for the project was $650 per square foot—about half the square-foot cost of condo construction.
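    For scale, a quick back-of-envelope check on those figures (the condo number is implied by “about half,” not stated in the piece):

```python
# Back-of-envelope check on the cost figures quoted above.
construction_cost_psf = 300   # $/sq ft, construction only
all_in_cost_psf = 650         # $/sq ft, including land and soft costs
implied_condo_psf = all_in_cost_psf * 2   # "about half the square-foot cost of condo construction"

print(f"All-in: ${all_in_cost_psf}/sq ft; implied condo comparison: ~${implied_condo_psf}/sq ft")
```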
Atkinson’s sensitive design provides natural light on three sides of all but two units, ample cross-ventilation and closet space, and office nooks that overlook entry stairs—as well as façades detailed to fit in with the scale of the neighbourhood. Details like bespoke mailboxes add polish to the composition.
    The financial success of the project depended largely on government incentives for housing: just before construction started, the province waived HST on rental developments, and the City exempted four-plexes from development charges. 
    Paloheimo’s project management of the endeavour ensured the project stayed on track. He kept a close eye on the prices tendered by the general contractor, and ended up finding some of the trades on his own—developing such a good rapport that he bought them cakes from a nearby patisserie at the end of the project. Both Atkinson and Paloheimo also befriended the neighbours, one of whom provided temporary power from her home when the hydro connection was delayed. 
    Can this kind of success be replicated at scale? Paloheimo is cautiously hopeful, and plans to continue with small-scale development projects in Toronto. But he acknowledges that it’s not an endeavour for the faint of heart. “You have a house that used to be just four walls and a roof,” he says. “And then we’re gradually adding complexity. If you’re doing sustainable housing, it’s got to have a certain R-value, a certain airtightness. So it creates headwinds if you want to make affordable housing.”
    The bigger problem, he says, is the financialization of housing—unlike a car, which you expect to lose value and cost money each year, we expect our homes to continually increase in value. “If we could get away from that, we could focus on what’s really important about housing: which is comfort, space, light, services.”

    As appeared in the June 2025 issue of Canadian Architect magazine
  • Texas is headed for a drought—but lawmakers won’t do the one thing necessary to save its water supply

    LUBBOCK — Every winter, after the sea of cotton has been harvested in the South Plains and the ground looks barren, technicians with the High Plains Underground Water Conservation District check the water levels in nearly 75,000 wells across 16 counties.

    For years, their measurements have shown what farmers and water conservationists fear most—the Ogallala Aquifer, an underground water source that’s the lifeblood of the South Plains agriculture industry, is running dry.

    That’s because of a century-old law called the rule of capture.

    The rule is simple: If you own the land above an aquifer in Texas, the water underneath is yours. You can use as much as you want, as long as it’s not wasted or taken maliciously. The same applies to your neighbor. If they happen to use more water than you, then that’s just bad luck.

    To put it another way, landowners can mostly pump as much water as they choose without facing liability to surrounding landowners whose wells might be depleted as a result.

    Following the Dust Bowl—and to stave off catastrophe—state lawmakers created groundwater conservation districts in 1949 to protect what water is left. But their power to restrict landowners is limited.

    “The mission is to save as much water possible for as long as possible, with as little impact on private property rights as possible,” said Jason Coleman, manager for the High Plains Underground Water Conservation District. “How do you do that? It’s a difficult task.”

A 1953 map of the wells in Lubbock County hangs in the office of the groundwater district. [Photo: Annie Rice for The Texas Tribune]

    Rapid population growth, climate change, and aging water infrastructure all threaten the state’s water supply. Texas does not have enough water to meet demand if the state is stricken with a historic drought, according to the Texas Water Development Board, the state agency that manages Texas’ water supply.

Lawmakers want to invest in every corner to save the state’s water. This week, they reached a historic $20 billion deal on water projects.

High Plains Underground Water District General Manager Jason Coleman stands in the district’s meeting room on May 21 in Lubbock. [Photo: Annie Rice for The Texas Tribune]

    But no one wants to touch the rule of capture. In a state known for rugged individualism, politically speaking, reforming the law is tantamount to stripping away freedoms.

    “There probably are opportunities to vest groundwater districts with additional authority,” said Amy Hardberger, director for the Texas Tech University Center for Water Law and Policy. “I don’t think the political climate is going to do that.”

    State Sen. Charles Perry, a Lubbock Republican, and Rep. Cody Harris, a Palestine Republican, led the effort on water in Austin this year. Neither responded to requests for comment.

    Carlos Rubinstein, a water expert with consulting firm RSAH2O and a former chairman of the water development board, said the rule has been relied upon so long that it would be near impossible to undo the law.

    “I think it’s better to spend time working within the rules,” Rubinstein said. “And respect the rule of capture, yet also recognize that, in and of itself, it causes problems.”

    Even though groundwater districts were created to regulate groundwater, the law effectively stops them from doing so, or they risk major lawsuits. The state water plan, which spells out how the state’s water is to be used, acknowledges the shortfall. Groundwater availability is expected to decline by 25% by 2070, mostly due to reduced supply in the Ogallala and Edwards-Trinity aquifers. Together, the aquifers stretch across West Texas and up through the Panhandle.

By itself, the Ogallala holds an estimated three trillion gallons of water, though the overwhelming majority in Texas is used by farmers. It’s expected to face a 50% decline by 2070.

    Groundwater is 54% of the state’s total water supply and is the state’s most vulnerable natural resource. It’s created by rainfall and other precipitation, and seeps into the ground. Like surface water, groundwater is heavily affected by ongoing droughts and prolonged heat waves. However, the state has more say in regulating surface water than it does groundwater. Surface water laws have provisions that cut supply to newer users in a drought and prohibit transferring surface water outside of basins.
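    Putting the plan’s two figures together (an illustrative calculation only; it holds surface water constant, which the article suggests will not be the case):

```python
# Rough arithmetic linking two state water plan figures quoted above.
groundwater_share = 0.54      # groundwater as share of total state supply
groundwater_decline = 0.25    # projected groundwater decline by 2070

# Hit to total supply from groundwater alone, surface water held flat.
print(f"~{groundwater_share * groundwater_decline:.1%} of total supply")  # ~13.5%
```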

    Historically, groundwater has been used by agriculture in the High Plains. However, as surface water evaporates at a quicker clip, cities and businesses are increasingly interested in tapping the underground resource. As Texas’ population continues to grow and surface water declines, groundwater will be the prize in future fights for water.

    In many ways, the damage is done in the High Plains, a region that spans from the top of the Panhandle down past Lubbock. The Ogallala Aquifer runs beneath the region, and it’s faced depletion to the point of no return, according to experts. Simply put: The Ogallala is not refilling to keep up with demand.

“It’s a creeping disaster,” said Robert Mace, executive director of the Meadows Center for Water and the Environment. “It isn’t like you wake up tomorrow and nobody can pump anymore. It’s just happening slowly, every year.”

    [Image: Yuriko Schumacher/The Texas Tribune]

    Groundwater districts and the law

    The High Plains Water District was the first groundwater district created in Texas.

    Over a protracted multi-year fight, the Legislature created these new local government bodies in 1949, with voter approval, enshrining the new stewards of groundwater into the state Constitution.

If the lawmakers hoped to embolden local officials to manage the troves of water under the soil, they failed. There are areas with groundwater that don’t have conservation districts. Each groundwater district has different powers. In practice, most water districts permit wells and make decisions on spacing and location to meet the needs of the property owner.

    The one thing all groundwater districts have in common: They stop short of telling landowners they can’t pump water.

In the seven decades since groundwater districts were created, a series of lawsuits has effectively strangled them. Even as water levels decline from use and drought, districts still get regular requests for new wells. They won’t say no out of fear of litigation.

The field technician coverage area is seen in Nathaniel Bibbs’ office at the High Plains Underground Water District. Bibbs is a permit assistant for the district. [Photo: Annie Rice for The Texas Tribune]

    “You have a host of different decisions to make as it pertains to management of groundwater,” Coleman said. “That list has grown over the years.”

    The possibility of lawsuits makes groundwater districts hesitant to regulate usage or put limitations on new well permits. Groundwater districts have to defend themselves in lawsuits, and most lack the resources to do so.

A well spacing guide is seen in Nathaniel Bibbs’ office. [Photo: Annie Rice for The Texas Tribune]

    “The law works against us in that way,” Hardberger, with Texas Tech University, said. “It means one large tool in our toolbox, regulation, is limited.”

    The most recent example is a lawsuit between the Braggs Farm and the Edwards Aquifer Authority. The farm requested permits for two pecan orchards in Medina County, outside San Antonio. The authority granted only one and limited how much water could be used based on state law.

    It wasn’t an arbitrary decision. The authority said it followed the statute set by the Legislature to determine the permit.

    “That’s all they were guaranteed,” said Gregory Ellis, the first general manager of the authority, referring to the water available to the farm.

    The Braggs family filed a takings lawsuit against the authority. This kind of claim can be filed when any level of government—including groundwater districts—takes private property for public use without paying for the owner’s losses.

Braggs won. It is the only successful water-related takings claim in Texas, and it made groundwater laws murkier. It cost the authority $4.5 million.

    “I think it should have been paid by the state Legislature,” Ellis said. “They’re the ones who designed that permitting system. But that didn’t happen.”

    An appeals court upheld the ruling in 2013, and the Texas Supreme Court denied petitions to consider appeals. However, the state’s supreme court has previously suggested the Legislature could enhance the powers of the groundwater districts and regulate groundwater like surface water, just as many other states have done.

    While the laws are complicated, Ellis said the fundamental rule of capture has benefits. It has saved Texas’ legal system from a flurry of lawsuits between well owners.

    “If they had said ‘Yes, you can sue your neighbor for damaging your well,’ where does it stop?” Ellis asked. “Everybody sues everybody.”

    Coleman, the High Plains district’s manager, said some people want groundwater districts to have more power, while others think they have too much. Well owners want restrictions for others, but not on them, he said.

    “You’re charged as a district with trying to apply things uniformly and fairly,” Coleman said.

    Can’t reverse the past

    Two tractors were dropping seeds around Walt Hagood’s farm as he turned on his irrigation system for the first time this year. He didn’t plan on using much water. It’s too precious.

    The cotton farm stretches across 2,350 acres on the outskirts of Wolfforth, a town 12 miles southwest of Lubbock. Hagood irrigates about 80 acres of land, and prays that rain takes care of the rest.

Walt Hagood drives across his farm on May 12, in Wolfforth. Hagood utilizes “dry farming,” a technique that relies on natural rainfall. [Photo: Annie Rice for The Texas Tribune]

    “We used to have a lot of irrigated land with adequate water to make a crop,” Hagood said. “We don’t have that anymore.”

    The High Plains is home to cotton and cattle, multi-billion-dollar agricultural industries. The success is in large part due to the Ogallala. Since its discovery, the aquifer has helped farms around the region spring up through irrigation, a way for farmers to water their crops instead of waiting for rain that may not come. But as water in the aquifer declines, there are growing concerns that there won’t be enough water to support agriculture in the future.

    At the peak of irrigation development, more than 8.5 million acres were irrigated in Texas. About 65% of that was in the High Plains. In the decades since the irrigation boom, High Plains farmers have resorted to methods that might save water and keep their livelihoods afloat. They’ve changed their irrigation systems so water is used more efficiently. They grow cover crops so their soil is more likely to soak up rainwater. Some use apps to see where water is needed so it’s not wasted.

Furrow irrigation is seen at Walt Hagood’s cotton farm.

    Farmers who have not changed their irrigation systems might not have a choice in the near future. In some areas, it can take a week to pump an inch of water from the aquifer because of how little is left. As conditions change underground, farmers are forced to drill deeper for water. That causes additional problems. Calcium can build up, and the water is of poorer quality. And when the water is used to spray crops through a pivot irrigation system, it acts more like a humidifier, with the water quickly evaporating in the heat.

    According to the groundwater district’s most recent management plan, 2 million acres in the district use groundwater for irrigation. About 95% of water from the Ogallala is used for irrigated agriculture. The plan states that the irrigated farms “afford economic stability to the area and support a number of other industries.”

    The state water plan shows groundwater supply is expected to decline, and drought won’t be the only factor causing a shortage. Demand for municipal use outweighs irrigation use, reflecting the state’s future growth. In Region O, which is the South Plains, water for irrigation declines by 2070 while demand for municipal use rises because of population growth in the region.

    Coleman, with the High Plains groundwater district, often thinks about how the aquifer will hold up with future growth. There are some factors at play with water planning that are nearly impossible to predict and account for, Coleman said. Declining surface water could make groundwater a source for municipalities that didn’t depend on it before. Regions known for having big, open patches of land, like the High Plains, could be attractive to incoming businesses. People could move to the country and want to drill a well, with no understanding of water availability.

    The state will continue to grow, Coleman said, and all the incoming businesses and industries will undoubtedly need water.

“We could say ‘Well, it’s no one’s fault. We didn’t know that factory would need 20,000 acre-feet of water a year,’” Coleman said. “It’s not happening right now, but what’s around the corner?”
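    For a sense of what 20,000 acre-feet a year means, using the standard conversion of about 325,851 gallons per acre-foot:

```python
# Scale of the hypothetical factory demand Coleman describes above.
GALLONS_PER_ACRE_FOOT = 325_851   # standard U.S. conversion

annual_gallons = 20_000 * GALLONS_PER_ACRE_FOOT
print(f"{annual_gallons / 1e9:.1f} billion gallons per year")        # ~6.5
print(f"{annual_gallons / 365 / 1e6:.1f} million gallons per day")   # ~17.9
```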

    Coleman said this puts agriculture in a tenuous position. The region is full of small towns that depend on agriculture and have supporting businesses, like cotton gins, equipment and feed stores, and pesticide and fertilizer sprayers. This puts pressure on the High Plains water district, along with the two regional water planning groups in the region, to keep agriculture alive.

“Districts are not trying to reduce pumping down to a sustainable level,” said Mace, of the Meadows Center. “And I don’t fault them for that, because doing that is economic devastation in a region with farmers.”

    Hagood, the cotton farmer, doesn’t think reforming groundwater rights is the way to solve it. What’s done is done, he said.

    “Our U.S. Constitution protects our private property rights, and that’s what this is all about,” Hagood said. “Any time we have a regulation and people are given more authority, it doesn’t work out right for everybody.”

Rapid population growth, climate change, and aging water infrastructure all threaten the state’s water supply.

    What can be done

    The state water plan recommends irrigation conservation as a strategy. It’s also the least costly water management method.

    But that strategy is fraught. Farmers need to irrigate in times of drought, and telling them to stop can draw criticism.

    In Eastern New Mexico, the Ogallala Land and Water Conservancy, a nonprofit organization, has been retiring irrigation wells. Landowners keep their water rights, and the organization pays them to stop irrigating their farms. Landowners get paid every year as part of the voluntary agreement, and they can end it at any point.

    Ladona Clayton, executive director of the organization, said they have been criticized, with their efforts being called a “war” and “land grab.” They also get pushback on why the responsibility falls on farmers. She said it’s because of how much water is used for irrigation. They have to be aggressive in their approach, she said. The aquifer supplies water to the Cannon Air Force Base.

    “We don’t want them to stop agricultural production,” Clayton said. “But for me to say it will be the same level that irrigation can support would be untrue.”

    There is another possible lifeline that people in the High Plains are eyeing as a solution: the Dockum Aquifer. It’s a minor aquifer that underlies part of the Ogallala, so it would be accessible to farmers and ranchers in the region. The High Plains Water District also oversees this aquifer.

    If it seems too good to be true—that the most irrigated part of Texas would just so happen to have another abundant supply of water flowing underneath—it’s because there’s a catch. The Dockum is full of extremely salty brackish water. Some counties can use the water for irrigation and drinking water without treatment, but it’s unusable in others. According to the groundwater district, a test well in Lubbock County pulled up water that was as salty as seawater.

    Rubinstein, the former water development board chairman, said there are pockets of brackish groundwater in Texas that haven’t been tapped yet. It would be enough to meet the needs on the horizon, but it would also be very expensive to obtain and use. A landowner would have to go deeper to get it, then pump the water over a longer distance.

    “That costs money, and then you have to treat it on top of that,” Rubinstein said. “But, it is water.”

    Landowners have expressed interest in using desalination, a treatment method to lower dissolved salt levels. Desalination of produced and brackish water is one of the ideas that was being floated around at the Legislature this year, along with building a pipeline to move water across the state. Hagood, the farmer, is skeptical. He thinks whatever water they move could get used up before it makes it all the way to West Texas.

There is always brackish groundwater. Another aquifer brings the chance of history repeating itself—if the Dockum Aquifer is treated so its water is usable, will people drain it, too?

    Hagood said there would have to be limits.

    Disclosure: Edwards Aquifer Authority and Texas Tech University have been financial supporters of The Texas Tribune. Financial supporters play no role in the Tribune’s journalism. Find a complete list of them here.

    This article originally appeared in The Texas Tribune, a member-supported, nonpartisan newsroom informing and engaging Texans on state politics and policy. Learn more at texastribune.org.
    #texas #headed #droughtbut #lawmakers #wont
    Texas is headed for a drought—but lawmakers won’t do the one thing necessary to save its water supply
    LUBBOCK — Every winter, after the sea of cotton has been harvested in the South Plains and the ground looks barren, technicians with the High Plains Underground Water Conservation District check the water levels in nearly 75,000 wells across 16 counties. For years, their measurements have shown what farmers and water conservationists fear most—the Ogallala Aquifer, an underground water source that’s the lifeblood of the South Plains agriculture industry, is running dry. That’s because of a century-old law called the rule of capture. The rule is simple: If you own the land above an aquifer in Texas, the water underneath is yours. You can use as much as you want, as long as it’s not wasted or taken maliciously. The same applies to your neighbor. If they happen to use more water than you, then that’s just bad luck. To put it another way, landowners can mostly pump as much water as they choose without facing liability to surrounding landowners whose wells might be depleted as a result. Following the Dust Bowl—and to stave off catastrophe—state lawmakers created groundwater conservation districts in 1949 to protect what water is left. But their power to restrict landowners is limited. “The mission is to save as much water possible for as long as possible, with as little impact on private property rights as possible,” said Jason Coleman, manager for the High Plains Underground Water Conservation District. “How do you do that? It’s a difficult task.” A 1953 map of the wells in Lubbock County hangs in the office of the groundwater district.Rapid population growth, climate change, and aging water infrastructure all threaten the state’s water supply. Texas does not have enough water to meet demand if the state is stricken with a historic drought, according to the Texas Water Development Board, the state agency that manages Texas’ water supply. Lawmakers want to invest in every corner to save the state’s water. This week, they reached a historic billion deal on water projects. High Plains Underground Water District General Manager Jason Coleman stands in the district’s meeting room on May 21 in Lubbock.But no one wants to touch the rule of capture. In a state known for rugged individualism, politically speaking, reforming the law is tantamount to stripping away freedoms. “There probably are opportunities to vest groundwater districts with additional authority,” said Amy Hardberger, director for the Texas Tech University Center for Water Law and Policy. “I don’t think the political climate is going to do that.” State Sen. Charles Perry, a Lubbock Republican, and Rep. Cody Harris, a Palestine Republican, led the effort on water in Austin this year. Neither responded to requests for comment. Carlos Rubinstein, a water expert with consulting firm RSAH2O and a former chairman of the water development board, said the rule has been relied upon so long that it would be near impossible to undo the law. “I think it’s better to spend time working within the rules,” Rubinstein said. “And respect the rule of capture, yet also recognize that, in and of itself, it causes problems.” Even though groundwater districts were created to regulate groundwater, the law effectively stops them from doing so, or they risk major lawsuits. The state water plan, which spells out how the state’s water is to be used, acknowledges the shortfall. Groundwater availability is expected to decline by 25% by 2070, mostly due to reduced supply in the Ogallala and Edwards-Trinity aquifers. 
Together, the aquifers stretch across West Texas and up through the Panhandle. By itself, the Ogallala has an estimated three trillion gallons of water. Though the overwhelming majority in Texas is used by farmers. It’s expected to face a 50% decline by 2070. Groundwater is 54% of the state’s total water supply and is the state’s most vulnerable natural resource. It’s created by rainfall and other precipitation, and seeps into the ground. Like surface water, groundwater is heavily affected by ongoing droughts and prolonged heat waves. However, the state has more say in regulating surface water than it does groundwater. Surface water laws have provisions that cut supply to newer users in a drought and prohibit transferring surface water outside of basins. Historically, groundwater has been used by agriculture in the High Plains. However, as surface water evaporates at a quicker clip, cities and businesses are increasingly interested in tapping the underground resource. As Texas’ population continues to grow and surface water declines, groundwater will be the prize in future fights for water. In many ways, the damage is done in the High Plains, a region that spans from the top of the Panhandle down past Lubbock. The Ogallala Aquifer runs beneath the region, and it’s faced depletion to the point of no return, according to experts. Simply put: The Ogallala is not refilling to keep up with demand. “It’s a creeping disaster,” said Robert Mace, executive director of the Meadows Center for Water and the Environment. “It isn’t like you wake up tomorrow and nobody can pump anymore. It’s just happening slowly, every year.”Groundwater districts and the law The High Plains Water District was the first groundwater district created in Texas. Over a protracted multi-year fight, the Legislature created these new local government bodies in 1949, with voter approval, enshrining the new stewards of groundwater into the state Constitution. If the lawmakers hoped to embolden local officials to manage the troves of water under the soil, they failed. There are areas with groundwater that don’t have conservation districts. Each groundwater districts has different powers. In practice, most water districts permit wells and make decisions on spacing and location to meet the needs of the property owner. The one thing all groundwater districts have in common: They stop short of telling landowners they can’t pump water. In the seven decades since groundwater districts were created, a series of lawsuits have effectively strangled groundwater districts. Even as water levels decline from use and drought, districts still get regular requests for new wells. They won’t say no out of fear of litigation. The field technician coverage area is seen in Nathaniel Bibbs’ office at the High Plains Underground Water District. Bibbs is a permit assistant for the district.“You have a host of different decisions to make as it pertains to management of groundwater,” Coleman said. “That list has grown over the years.” The possibility of lawsuits makes groundwater districts hesitant to regulate usage or put limitations on new well permits. Groundwater districts have to defend themselves in lawsuits, and most lack the resources to do so. A well spacing guide is seen in Nathaniel Bibbs’ office.“The law works against us in that way,” Hardberger, with Texas Tech University, said. “It means one large tool in our toolbox, regulation, is limited.” The most recent example is a lawsuit between the Braggs Farm and the Edwards Aquifer Authority. 
The farm requested permits for two pecan orchards in Medina County, outside San Antonio. The authority granted only one and limited how much water could be used based on state law. It wasn’t an arbitrary decision. The authority said it followed the statute set by the Legislature to determine the permit. “That’s all they were guaranteed,” said Gregory Ellis, the first general manager of the authority, referring to the water available to the farm. The Braggs family filed a takings lawsuit against the authority. This kind of claim can be filed when any level of government—including groundwater districts—takes private property for public use without paying for the owner’s losses. Braggs won. It is the only successful water-related takings claim in Texas, and it made groundwater laws murkier. It cost the authority million. “I think it should have been paid by the state Legislature,” Ellis said. “They’re the ones who designed that permitting system. But that didn’t happen.” An appeals court upheld the ruling in 2013, and the Texas Supreme Court denied petitions to consider appeals. However, the state’s supreme court has previously suggested the Legislature could enhance the powers of the groundwater districts and regulate groundwater like surface water, just as many other states have done. While the laws are complicated, Ellis said the fundamental rule of capture has benefits. It has saved Texas’ legal system from a flurry of lawsuits between well owners. “If they had said ‘Yes, you can sue your neighbor for damaging your well,’ where does it stop?” Ellis asked. “Everybody sues everybody.” Coleman, the High Plains district’s manager, said some people want groundwater districts to have more power, while others think they have too much. Well owners want restrictions for others, but not on them, he said. “You’re charged as a district with trying to apply things uniformly and fairly,” Coleman said. Can’t reverse the past Two tractors were dropping seeds around Walt Hagood’s farm as he turned on his irrigation system for the first time this year. He didn’t plan on using much water. It’s too precious. The cotton farm stretches across 2,350 acres on the outskirts of Wolfforth, a town 12 miles southwest of Lubbock. Hagood irrigates about 80 acres of land, and prays that rain takes care of the rest. Walt Hagood drives across his farm on May 12, in Wolfforth. Hagood utilizes “dry farming,” a technique that relies on natural rainfall.“We used to have a lot of irrigated land with adequate water to make a crop,” Hagood said. “We don’t have that anymore.” The High Plains is home to cotton and cattle, multi-billion-dollar agricultural industries. The success is in large part due to the Ogallala. Since its discovery, the aquifer has helped farms around the region spring up through irrigation, a way for farmers to water their crops instead of waiting for rain that may not come. But as water in the aquifer declines, there are growing concerns that there won’t be enough water to support agriculture in the future. At the peak of irrigation development, more than 8.5 million acres were irrigated in Texas. About 65% of that was in the High Plains. In the decades since the irrigation boom, High Plains farmers have resorted to methods that might save water and keep their livelihoods afloat. They’ve changed their irrigation systems so water is used more efficiently. They grow cover crops so their soil is more likely to soak up rainwater. Some use apps to see where water is needed so it’s not wasted. 
A furrow irrigation is seen at Walt Hagood’s cotton farm.Farmers who have not changed their irrigation systems might not have a choice in the near future. It can take a week to pump an inch of water in some areas from the aquifer because of how little water is left. As conditions change underground, they are forced to drill deeper for water. That causes additional problems. Calcium can build up, and the water is of poorer quality. And when the water is used to spray crops through a pivot irrigation system, it’s more of a humidifier as water quickly evaporates in the heat. According to the groundwater district’s most recent management plan, 2 million acres in the district use groundwater for irrigation. About 95% of water from the Ogallala is used for irrigated agriculture. The plan states that the irrigated farms “afford economic stability to the area and support a number of other industries.” The state water plan shows groundwater supply is expected to decline, and drought won’t be the only factor causing a shortage. Demand for municipal use outweighs irrigation use, reflecting the state’s future growth. In Region O, which is the South Plains, water for irrigation declines by 2070 while demand for municipal use rises because of population growth in the region. Coleman, with the High Plains groundwater district, often thinks about how the aquifer will hold up with future growth. There are some factors at play with water planning that are nearly impossible to predict and account for, Coleman said. Declining surface water could make groundwater a source for municipalities that didn’t depend on it before. Regions known for having big, open patches of land, like the High Plains, could be attractive to incoming businesses. People could move to the country and want to drill a well, with no understanding of water availability. The state will continue to grow, Coleman said, and all the incoming businesses and industries will undoubtedly need water. “We could say ‘Well, it’s no one’s fault. We didn’t know that factory would need 20,000 acre-feet of water a year,” Coleman said. “It’s not happening right now, but what’s around the corner?” Coleman said this puts agriculture in a tenuous position. The region is full of small towns that depend on agriculture and have supporting businesses, like cotton gins, equipment and feed stores, and pesticide and fertilizer sprayers. This puts pressure on the High Plains water district, along with the two regional water planning groups in the region, to keep agriculture alive. “Districts are not trying to reduce pumping down to a sustainable level,” said Mace with the Meadows Foundation. “And I don’t fault them for that, because doing that is economic devastation in a region with farmers.” Hagood, the cotton farmer, doesn’t think reforming groundwater rights is the way to solve it. What’s done is done, he said. “Our U.S. Constitution protects our private property rights, and that’s what this is all about,” Hagood said. “Any time we have a regulation and people are given more authority, it doesn’t work out right for everybody.” Rapid population growth, climate change, and aging water infrastructure all threaten the state’s water supply.What can be done The state water plan recommends irrigation conservation as a strategy. It’s also the least costly water management method. But that strategy is fraught. Farmers need to irrigate in times of drought, and telling them to stop can draw criticism. 
In Eastern New Mexico, the Ogallala Land and Water Conservancy, a nonprofit organization, has been retiring irrigation wells. Landowners keep their water rights, and the organization pays them to stop irrigating their farms. Landowners get paid every year as part of the voluntary agreement, and they can end it at any point. Ladona Clayton, executive director of the organization, said they have been criticized, with their efforts being called a “war” and “land grab.” They also get pushback on why the responsibility falls on farmers. She said it’s because of how much water is used for irrigation. They have to be aggressive in their approach, she said. The aquifer supplies water to the Cannon Air Force Base. “We don’t want them to stop agricultural production,” Clayton said. “But for me to say it will be the same level that irrigation can support would be untrue.” There is another possible lifeline that people in the High Plains are eyeing as a solution: the Dockum Aquifer. It’s a minor aquifer that underlies part of the Ogallala, so it would be accessible to farmers and ranchers in the region. The High Plains Water District also oversees this aquifer. If it seems too good to be true—that the most irrigated part of Texas would just so happen to have another abundant supply of water flowing underneath—it’s because there’s a catch. The Dockum is full of extremely salty brackish water. Some counties can use the water for irrigation and drinking water without treatment, but it’s unusable in others. According to the groundwater district, a test well in Lubbock County pulled up water that was as salty as seawater. Rubinstein, the former water development board chairman, said there are pockets of brackish groundwater in Texas that haven’t been tapped yet. It would be enough to meet the needs on the horizon, but it would also be very expensive to obtain and use. A landowner would have to go deeper to get it, then pump the water over a longer distance. “That costs money, and then you have to treat it on top of that,” Rubinstein said. “But, it is water.” Landowners have expressed interest in using desalination, a treatment method to lower dissolved salt levels. Desalination of produced and brackish water is one of the ideas that was being floated around at the Legislature this year, along with building a pipeline to move water across the state. Hagood, the farmer, is skeptical. He thinks whatever water they move could get used up before it makes it all the way to West Texas. There is always brackish groundwater. Another aquifer brings the chance of history repeating—if the Dockum aquifer is treated so its water is usable, will people drain it, too? Hagood said there would have to be limits. Disclosure: Edwards Aquifer Authority and Texas Tech University have been financial supporters of The Texas Tribune. Financial supporters play no role in the Tribune’s journalism. Find a complete list of them here. This article originally appeared in The Texas Tribune, a member-supported, nonpartisan newsroom informing and engaging Texans on state politics and policy. Learn more at texastribune.org. #texas #headed #droughtbut #lawmakers #wont
    WWW.FASTCOMPANY.COM
    Texas is headed for a drought—but lawmakers won’t do the one thing necessary to save its water supply
    LUBBOCK — Every winter, after the sea of cotton has been harvested in the South Plains and the ground looks barren, technicians with the High Plains Underground Water Conservation District check the water levels in nearly 75,000 wells across 16 counties. For years, their measurements have shown what farmers and water conservationists fear most—the Ogallala Aquifer, an underground water source that’s the lifeblood of the South Plains agriculture industry, is running dry. That’s because of a century-old law called the rule of capture. The rule is simple: If you own the land above an aquifer in Texas, the water underneath is yours. You can use as much as you want, as long as it’s not wasted or taken maliciously. The same applies to your neighbor. If they happen to use more water than you, then that’s just bad luck. To put it another way, landowners can mostly pump as much water as they choose without facing liability to surrounding landowners whose wells might be depleted as a result. Following the Dust Bowl—and to stave off catastrophe—state lawmakers created groundwater conservation districts in 1949 to protect what water is left. But their power to restrict landowners is limited. “The mission is to save as much water possible for as long as possible, with as little impact on private property rights as possible,” said Jason Coleman, manager for the High Plains Underground Water Conservation District. “How do you do that? It’s a difficult task.” A 1953 map of the wells in Lubbock County hangs in the office of the groundwater district. [Photo: Annie Rice for The Texas Tribune] Rapid population growth, climate change, and aging water infrastructure all threaten the state’s water supply. Texas does not have enough water to meet demand if the state is stricken with a historic drought, according to the Texas Water Development Board, the state agency that manages Texas’ water supply. Lawmakers want to invest in every corner to save the state’s water. This week, they reached a historic $20 billion deal on water projects. High Plains Underground Water District General Manager Jason Coleman stands in the district’s meeting room on May 21 in Lubbock. [Photo: Annie Rice for The Texas Tribune] But no one wants to touch the rule of capture. In a state known for rugged individualism, politically speaking, reforming the law is tantamount to stripping away freedoms. “There probably are opportunities to vest groundwater districts with additional authority,” said Amy Hardberger, director for the Texas Tech University Center for Water Law and Policy. “I don’t think the political climate is going to do that.” State Sen. Charles Perry, a Lubbock Republican, and Rep. Cody Harris, a Palestine Republican, led the effort on water in Austin this year. Neither responded to requests for comment. Carlos Rubinstein, a water expert with consulting firm RSAH2O and a former chairman of the water development board, said the rule has been relied upon so long that it would be near impossible to undo the law. “I think it’s better to spend time working within the rules,” Rubinstein said. “And respect the rule of capture, yet also recognize that, in and of itself, it causes problems.” Even though groundwater districts were created to regulate groundwater, the law effectively stops them from doing so, or they risk major lawsuits. The state water plan, which spells out how the state’s water is to be used, acknowledges the shortfall. 
    Like surface water, groundwater is heavily affected by ongoing droughts and prolonged heat waves. However, the state has far more say in regulating surface water than it does groundwater: Surface water laws include provisions that cut supply to newer users in a drought and prohibit transferring surface water outside of river basins.

    Historically, groundwater in the High Plains has been used by agriculture. But as surface water evaporates at a quicker clip, cities and businesses are increasingly interested in tapping the underground resource. As Texas’ population continues to grow and surface water declines, groundwater will be the prize in future fights over water.

    In many ways, the damage is already done in the High Plains, a region that spans from the top of the Panhandle down past Lubbock. The Ogallala Aquifer runs beneath the region, and it has been depleted past the point of no return, according to experts. Simply put: The Ogallala is not refilling fast enough to keep up with demand.

    “It’s a creeping disaster,” said Robert Mace, executive director of the Meadows Center for Water and the Environment. “It isn’t like you wake up tomorrow and nobody can pump anymore. It’s just happening slowly, every year.”

    [Image: Yuriko Schumacher/The Texas Tribune]

    Groundwater districts and the law

    The High Plains Water District was the first groundwater district created in Texas. After a protracted fight, the Legislature created these local government bodies in 1949, with voter approval, enshrining the new stewards of groundwater in the state Constitution. If lawmakers hoped to embolden local officials to manage the troves of water under the soil, they failed.

    Some areas with groundwater have no conservation district at all, and each groundwater district has different powers. In practice, most districts permit wells and make decisions on spacing and location to meet the needs of the property owner. The one thing all groundwater districts have in common: They stop short of telling landowners they can’t pump water.

    In the seven decades since groundwater districts were created, a series of lawsuits has effectively strangled them. Even as water levels decline from use and drought, districts still get regular requests for new wells. They won’t say no, out of fear of litigation.

    The field technician coverage area is seen in Nathaniel Bibbs’ office at the High Plains Underground Water District. Bibbs is a permit assistant for the district. [Photo: Annie Rice for The Texas Tribune]

    “You have a host of different decisions to make as it pertains to management of groundwater,” Coleman said. “That list has grown over the years.”

    The possibility of lawsuits makes groundwater districts hesitant to regulate usage or put limits on new well permits. Districts have to defend themselves in court, and most lack the resources to do so.
    A well spacing guide is seen in Nathaniel Bibbs’ office. [Photo: Annie Rice for The Texas Tribune]

    “The law works against us in that way,” said Hardberger, of Texas Tech University. “It means one large tool in our toolbox, regulation, is limited.”

    The most recent example is a lawsuit between the Bragg family’s farm and the Edwards Aquifer Authority. The farm requested permits for two pecan orchards in Medina County, outside San Antonio. The authority granted only one and limited how much water could be used, based on state law. It wasn’t an arbitrary decision: The authority said it followed the statute set by the Legislature to determine the permit.

    “That’s all they were guaranteed,” said Gregory Ellis, the first general manager of the authority, referring to the water available to the farm.

    The Bragg family filed a takings lawsuit against the authority. This kind of claim can be filed when any level of government—including groundwater districts—takes private property for public use without paying for the owner’s losses. The Braggs won. It is the only successful water-related takings claim in Texas, and it made groundwater law murkier. It cost the authority $4.5 million.

    “I think it should have been paid by the state Legislature,” Ellis said. “They’re the ones who designed that permitting system. But that didn’t happen.”

    An appeals court upheld the ruling in 2013, and the Texas Supreme Court declined to hear the case. The state’s high court has, however, previously suggested that the Legislature could enhance the powers of groundwater districts and regulate groundwater the way surface water is regulated, as many other states have done.

    While the laws are complicated, Ellis said the fundamental rule of capture has benefits: It has saved Texas’ legal system from a flurry of lawsuits between well owners. “If they had said ‘Yes, you can sue your neighbor for damaging your well,’ where does it stop?” Ellis asked. “Everybody sues everybody.”

    Coleman, the High Plains district’s manager, said some people want groundwater districts to have more power, while others think they have too much. Well owners want restrictions for others, but not for themselves, he said. “You’re charged as a district with trying to apply things uniformly and fairly,” Coleman said.

    Can’t reverse the past

    Two tractors were dropping seeds around Walt Hagood’s farm as he turned on his irrigation system for the first time this year. He didn’t plan on using much water; it’s too precious. The cotton farm stretches across 2,350 acres on the outskirts of Wolfforth, a town 12 miles southwest of Lubbock. Hagood irrigates about 80 acres of land and prays that rain takes care of the rest.

    Walt Hagood drives across his farm on May 12, in Wolfforth. Hagood utilizes “dry farming,” a technique that relies on natural rainfall. [Photo: Annie Rice for The Texas Tribune]

    “We used to have a lot of irrigated land with adequate water to make a crop,” Hagood said. “We don’t have that anymore.”

    The High Plains is home to cotton and cattle, multibillion-dollar agricultural industries. That success is owed in large part to the Ogallala. Since its discovery, the aquifer has helped farms around the region spring up through irrigation, a way for farmers to water their crops instead of waiting for rain that may not come. But as water in the aquifer declines, there are growing concerns that there won’t be enough to support agriculture in the future. At the peak of irrigation development, more than 8.5 million acres were irrigated in Texas; about 65% of that was in the High Plains.
    In the decades since the irrigation boom, High Plains farmers have turned to methods that might save water and keep their livelihoods afloat. They’ve changed their irrigation systems so water is used more efficiently. They grow cover crops so their soil is more likely to soak up rainwater. Some use apps to see where water is needed so none is wasted.

    A furrow irrigation system is seen at Walt Hagood’s cotton farm. [Photo: Annie Rice for The Texas Tribune]

    Farmers who have not changed their irrigation systems may not have a choice much longer. In some areas, so little water is left that it can take a week to pump an inch from the aquifer. As conditions change underground, farmers are forced to drill deeper for water, which causes additional problems: Calcium can build up, and the water is of poorer quality. And when that water is sprayed on crops through a pivot irrigation system, it acts more like a humidifier, evaporating quickly in the heat.

    According to the groundwater district’s most recent management plan, 2 million acres in the district use groundwater for irrigation, and about 95% of water drawn from the Ogallala goes to irrigated agriculture. The plan states that the irrigated farms “afford economic stability to the area and support a number of other industries.”

    The state water plan shows groundwater supply is expected to decline, and drought won’t be the only factor causing a shortage. Demand for municipal use is projected to outweigh irrigation use, reflecting the state’s future growth. In Region O, the South Plains, water demand for irrigation declines by 2070 while demand for municipal use rises with the region’s growing population.

    Coleman, with the High Plains groundwater district, often thinks about how the aquifer will hold up under future growth. Some factors in water planning are nearly impossible to predict and account for, Coleman said. Declining surface water could make groundwater a source for municipalities that didn’t depend on it before. Regions known for big, open patches of land, like the High Plains, could be attractive to incoming businesses. People could move to the country and want to drill a well, with no understanding of water availability. The state will continue to grow, Coleman said, and all the incoming businesses and industries will undoubtedly need water.

    “We could say, ‘Well, it’s no one’s fault. We didn’t know that factory would need 20,000 acre-feet of water a year,’” Coleman said. “It’s not happening right now, but what’s around the corner?”

    Coleman said this puts agriculture in a tenuous position. The region is full of small towns that depend on agriculture and its supporting businesses, like cotton gins, equipment and feed stores, and pesticide and fertilizer sprayers. That puts pressure on the High Plains water district, along with the region’s two water planning groups, to keep agriculture alive.

    “Districts are not trying to reduce pumping down to a sustainable level,” said Mace, of the Meadows Center. “And I don’t fault them for that, because doing that is economic devastation in a region with farmers.”

    Hagood, the cotton farmer, doesn’t think reforming groundwater rights is the way to solve the problem. What’s done is done, he said. “Our U.S. Constitution protects our private property rights, and that’s what this is all about,” Hagood said.
    “Any time we have a regulation and people are given more authority, it doesn’t work out right for everybody.”

    Rapid population growth, climate change, and aging water infrastructure all threaten the state’s water supply. [Photo: Annie Rice for The Texas Tribune]

    What can be done

    The state water plan recommends irrigation conservation as a strategy, and it is the least costly water management method. But the strategy is fraught: Farmers need to irrigate in times of drought, and telling them to stop can draw criticism.

    In Eastern New Mexico, the Ogallala Land and Water Conservancy, a nonprofit organization, has been retiring irrigation wells. Landowners keep their water rights, and the organization pays them to stop irrigating their farms. Landowners are paid every year as part of the voluntary agreement, and they can end it at any point.

    Ladona Clayton, executive director of the organization, said the group has been criticized, its efforts called a “war” and a “land grab.” It also gets pushback on why the responsibility falls on farmers; she said it’s because of how much water irrigation uses. The organization has to be aggressive in its approach, she said. The aquifer also supplies water to Cannon Air Force Base.

    “We don’t want them to stop agricultural production,” Clayton said. “But for me to say it will be the same level that irrigation can support would be untrue.”

    There is another possible lifeline that people in the High Plains are eyeing as a solution: the Dockum Aquifer, a minor aquifer that underlies part of the Ogallala, which would make it accessible to farmers and ranchers in the region. The High Plains Water District oversees this aquifer as well. If it seems too good to be true—that the most irrigated part of Texas would just so happen to have another abundant supply of water flowing underneath—it’s because there’s a catch: The Dockum is full of extremely salty brackish water. Some counties can use the water for irrigation and drinking without treatment, but it’s unusable in others. According to the groundwater district, a test well in Lubbock County pulled up water as salty as seawater.

    Rubinstein, the former water development board chairman, said there are pockets of brackish groundwater in Texas that haven’t been tapped yet. They would be enough to meet the needs on the horizon, but the water would be very expensive to obtain and use: A landowner would have to drill deeper to reach it, then pump it over a longer distance. “That costs money, and then you have to treat it on top of that,” Rubinstein said. “But, it is water.”

    Landowners have expressed interest in desalination, a treatment method that lowers dissolved salt levels. Desalination of produced and brackish water was one of the ideas floated at the Legislature this year, along with building a pipeline to move water across the state. Hagood, the farmer, is skeptical of a pipeline; he thinks whatever water it moves could get used up before it makes it all the way to West Texas. Brackish groundwater, at least, is already there. But another aquifer brings the chance of history repeating itself: If the Dockum is treated so its water is usable, will people drain it, too? Hagood said there would have to be limits.

    Disclosure: The Edwards Aquifer Authority and Texas Tech University have been financial supporters of The Texas Tribune. Financial supporters play no role in the Tribune’s journalism. Find a complete list of them here.
This article originally appeared in The Texas Tribune, a member-supported, nonpartisan newsroom informing and engaging Texans on state politics and policy. Learn more at texastribune.org.
  • Huawei Supernode 384 disrupts Nvidia’s AI market hold

    Huawei’s AI capabilities have made a breakthrough in the form of the company’s Supernode 384 architecture, marking an important moment in the global processor wars amid US-China tech tensions. The Chinese tech giant’s latest innovation emerged from last Friday’s Kunpeng Ascend Developer Conference in Shenzhen, where company executives demonstrated how the computing framework directly challenges Nvidia’s long-standing market dominance, even as Huawei continues to operate under severe US-led trade restrictions.

    Architectural innovation born from necessity

    Zhang Dixuan, president of Huawei’s Ascend computing business, articulated the fundamental problem driving the innovation during his conference keynote: “As the scale of parallel processing grows, cross-machine bandwidth in traditional server architectures has become a critical bottleneck for training.”

    The Supernode 384 abandons von Neumann computing principles in favour of a peer-to-peer architecture engineered specifically for modern AI workloads. The change proves especially powerful for Mixture-of-Experts models (machine-learning systems using multiple specialised sub-networks to solve complex computational challenges).

    Huawei’s CloudMatrix 384 implementation showcases impressive technical specifications: 384 Ascend AI processors spanning 12 computing cabinets and four bus cabinets, generating 300 petaflops of raw computational power paired with 48 terabytes of high-bandwidth memory, representing a leap in integrated AI computing infrastructure.

    Performance metrics challenge industry leaders

    Real-world benchmark testing reveals the system’s competitive positioning against established solutions. Dense AI models like Meta’s LLaMA 3 achieved 132 tokens per second per card on the Supernode 384 – 2.5 times the performance of traditional cluster architectures.

    Communications-intensive applications demonstrate even more dramatic improvements. Models from Alibaba’s Qwen and DeepSeek families reached 600 to 750 tokens per second per card, revealing the architecture’s optimisation for next-generation AI workloads.

    The performance gains stem from fundamental infrastructure redesigns. Huawei replaced conventional Ethernet interconnects with high-speed bus connections, improving communications bandwidth by 15 times while cutting single-hop latency from 2 microseconds to 200 nanoseconds – a tenfold improvement.

    Geopolitical strategy drives technical innovation

    The Supernode 384’s development cannot be divorced from broader US-China technological competition. American sanctions have systematically restricted Huawei’s access to cutting-edge semiconductor technologies, forcing the company to maximise performance within existing constraints.

    Industry analysis from SemiAnalysis suggests the CloudMatrix 384 uses Huawei’s latest Ascend 910C AI processor; the assessment acknowledges inherent performance limitations but highlights architectural advantages: “Huawei is a generation behind in chips, but its scale-up solution is arguably a generation ahead of Nvidia and AMD’s current products in the market.”

    The assessment shows how Huawei’s AI computing strategy has evolved beyond traditional hardware specifications toward system-level optimisation and architectural innovation.

    Market implications and deployment reality

    Beyond laboratory demonstrations, Huawei has put CloudMatrix 384 systems into operation in multiple Chinese data centres in Anhui Province, Inner Mongolia, and Guizhou Province. Such practical deployments validate the architecture’s viability and establish an infrastructure framework for broader market adoption.

    The system’s scalability potential – supporting tens of thousands of linked processors – positions it as a compelling platform for training increasingly sophisticated AI models. That capability addresses growing industry demand for massive-scale AI deployment across diverse sectors.

    Industry disruption and future considerations

    Huawei’s architectural breakthrough introduces both opportunities and complications for the global AI ecosystem. While providing a viable alternative to Nvidia’s market-leading solutions, it simultaneously accelerates the fragmentation of international technology infrastructure along geopolitical lines.

    The success of Huawei’s AI computing initiatives will depend on developer ecosystem adoption and sustained performance validation. The company’s aggressive developer conference outreach indicates a recognition that technical innovation alone cannot guarantee market acceptance.

    For organisations evaluating AI infrastructure investments, the Supernode 384 represents a new option that combines competitive performance with independence from US-controlled supply chains. However, long-term viability remains contingent on continued innovation cycles and improved geopolitical stability.
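    The reported figures lend themselves to quick back-of-the-envelope aggregation. The sketch below scales the per-card token rates quoted above across all 384 cards; linear scaling is an illustrative assumption (real cluster throughput depends on batching and interconnect contention), and the latency arithmetic simply restates the article’s claim.

```python
# Back-of-the-envelope math on the reported Supernode 384 figures.
# Per-card rates come from the article; scaling them linearly across
# all 384 cards is an illustrative assumption, not a measured result.

CARDS = 384

dense_tps = 132               # LLaMA 3 tokens/sec per card (dense model)
moe_low, moe_high = 600, 750  # Qwen/DeepSeek tokens/sec per card (MoE)

print(f"Dense aggregate: {CARDS * dense_tps:,} tokens/sec")   # 50,688
print(f"MoE aggregate:   {CARDS * moe_low:,} to {CARDS * moe_high:,} tokens/sec")
# -> 230,400 to 288,000 tokens/sec

# Latency: 2 microseconds down to 200 nanoseconds is indeed tenfold.
old_ns, new_ns = 2_000, 200
print(f"Single-hop latency improvement: {old_ns // new_ns}x")  # 10x
```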
  • Dutch businesses lag behind in cyber resilience as threats escalate

    The Netherlands is facing a growing cyber security crisis, with a staggering 66% of Dutch businesses lacking adequate cyber resilience, according to academic research.  
    As geopolitical tensions rise and digital threats escalate, Rick van der Kleij, a psychologist and professor in Cyber Resilient Organisations at Avans University of Applied Sciences, who also conducts research at TNO, says that traditional approaches have failed and a paradigm shift is urgently needed. 
    Van der Kleij suggests that cyber security provides the illusion of safety rather than actual protection for many Dutch organisations. His stark assessment is that the Netherlands’ traditional approach to cyber risk is fundamentally broken. 
    “We need to stop thinking in terms of cyber security. It’s a model that has demonstrably failed,” he says. “Despite years of investment in cyber security measures, the frequency and impact of incidents continue to increase rapidly across Dutch businesses.” 
    This reflects the central argument of his recent inaugural lecture “Now that security is no more”, where he called for a paradigm shift in how Dutch organisations approach cyber risks. 

    Van der Kleij describes “the great digital dilemma” of balancing openness and security in a country with one of Europe’s most advanced digital infrastructures. “How can entrepreneurs remain open and connected without having to completely lock down their businesses?” he asks. 
    The statistics are stark. Van der Kleij’s study found that 66% of Dutch businesses are inadequately prepared for cyber threats. Recent ABN Amro research confirms the crisis: one in five businesses suffered cyber crime damage last year, rising to nearly 30% among large companies. For the first time, SMEs (80%) are more frequently targeted than large corporations (75%), marking a significant shift in cyber criminal strategy. 
    Despite the numbers, a perception gap persists. Van der Kleij identifies ‘the overconfident’ – Dutch businesses believing their cyber security is adequate when it isn’t. While SME attack rates soar, their risk perception remains static, whereas large organisations show marked increases in awareness (from 41% to 64%). This creates a “waterbed effect” – as large companies strengthen defences, cyber criminals shift to less-prepared SMEs, which are paradoxically reducing their cyber security investments. 

    Van der Kleij emphasises a crucial distinction: while cyber security focuses on preventing incidents, cyber resilience acknowledges that incidents will happen. “It’s about having the capacity to react appropriately, recover from incidents, and learn from what went wrong to emerge stronger,” he says. 
    This requires four capabilities – prepare, respond, recover and adapt – yet most Dutch organisations focus only on preparation. The ABN Amro findings confirm this: many SMEs have firewalls but lack intrusion detection or incident response plans. Large companies take a more balanced approach, combining technology with training, response capabilities and insurance. 
    Uber’s experience illustrates the weakness of purely technical approaches. After a 2016 hack, the company implemented two-factor authentication – yet it was hacked again in 2022 by an 18-year-old using WhatsApp social engineering.
    “This shows that investing only in technology without addressing human factors creates fundamental weakness, which is particularly relevant for Dutch businesses that prioritise technological solutions,” van der Kleij adds. 

    Van der Kleij challenges the persistent myth that humans are cyber security’s weakest link. “People are often blamed when things go wrong, but the actual vulnerabilities typically lie elsewhere in the system, often in the design itself,” he says. 
    The misdirection is reflected in spending: 85% of cyber security investments go toward technology, 14% toward processes and just 1% toward the human component. Yet the ABN Amro research shows phishing – which succeeds through psychological manipulation rather than sophisticated technology – affects 71% of Dutch businesses. 
    “We’ve known for decades that people aren’t equipped to remember complex passwords across dozens of accounts, yet we continue demanding this and then express surprise when they create workarounds,” van der Kleij says.
    “Rather than blaming users, we should design systems that make secure behaviour easier. In the Netherlands, we need more human awareness in security teams, not more security awareness training for end users.” 

    Why do so many Dutch SMEs fail to invest in cyber resilience despite evident risks? Van der Kleij believes it’s about behaviour, not business size. “It’s not primarily about size or industry – it’s about behaviour and beliefs,” he says. 
    Common limiting beliefs among Dutch entrepreneurs include “I’m too small to be a target” or “I don’t have confidential information”. Remarkably, even suffering a cyber attack doesn’t change this mindset. “Studies show that when businesses are hacked, it doesn’t automatically lead them to better secure their operations afterward,” van der Kleij says. 
    The challenge is reaching those who need help most. “We have vouchers, we have arrangements where entrepreneurs can get help at a significantly reduced fee from cyber security professionals, but uptake remains negligible,” van der Kleij says. “It’s always the same parties who come to the government’s door – the large companies who are already mature. The small ones, we just can’t seem to reach them.” 
    Van der Kleij sees “relational capital” – resources generated through partnerships – as key to enhancing Dutch cyber resilience. “You can become more cyber resilient by establishing partnerships,” he says, pointing to government-encouraged initiatives like Information Sharing and Analysis Centers.  
    The ABN Amro research reveals why collaboration matters: 39% of large companies experienced cyber incidents originating with suppliers or partners, compared with 25% of smaller firms. This supply chain vulnerability drives major Dutch organisations to demand higher standards from partners through initiatives such as Big Helps Small. 
    European regulations reinforce this trend. The new NIS2 directive will expand coverage from hundreds to several thousand Dutch companies, yet only 11% have adequately prepared. Among SMEs, approximately half have done little preparation – despite Dutch police warnings about increasingly frequent ransomware attacks where criminals threaten to release stolen data publicly. 
    Van der Kleij’s current research at Avans University focuses on identifying barriers to cyber resilience investment through focus groups with Dutch entrepreneurs. “When we understand these barriers – which are more likely motivational than knowledge-related – we can design targeted interventions,” he says. 
    Van der Kleij’s message is stark: “The question isn’t whether your organisation will face a cyber incident, but when – and how effectively you’ll respond. Cyber resilience encompasses cyber security while adding crucial capabilities for response, recovery and adaptation. It’s time for a new paradigm in the Netherlands.” 

    Read more about Dutch cyber security