• Introducing the Meta Quest 3S Xbox, the latest innovation that boldly sidelines protections for minors—because who needs safety when you can have virtual reality, right? Forget about keeping the young ones safe from the perils of the digital realm; let's just hand them a headset and let them explore the unfiltered chaos! After all, what's more thrilling than a little unregulated immersion?

    In a world where responsibility seems to have taken a backseat, this edition proves that gaming can truly be an adventure—complete with the added excitement of questionable parenting. So, strap on those headsets, kids! Reality is overrated anyway.

    #MetaQuest3S #Xbox #VirtualReality #Gaming #ChildSafety
    The Meta Quest 3S Xbox: an edition that sets aside protections for minors?
    When you step into the virtual world with Meta Quest headsets, you experience a […] This article was published on REALITE-VIRTUELLE.COM.
  • Scientists Detect Unusual Airborne Toxin in the United States for the First Time

    Researchers unexpectedly discovered toxic airborne pollutants in Oklahoma. The image above depicts a field in Oklahoma. Credit: Shutterstock
    University of Colorado Boulder researchers made the first-ever airborne detection of Medium Chain Chlorinated Paraffins (MCCPs) in the Western Hemisphere.
    Sometimes, scientific research feels a lot like solving a mystery. Scientists head into the field with a clear goal and a solid hypothesis, but then the data reveals something surprising. That’s when the real detective work begins.
    This is exactly what happened to a team from the University of Colorado Boulder during a recent field study in rural Oklahoma. They were using a state-of-the-art instrument to track how tiny particles form and grow in the air. But instead of just collecting expected data, they uncovered something completely new: the first-ever airborne detection of Medium Chain Chlorinated Paraffins (MCCPs), a kind of toxic organic pollutant, in the Western Hemisphere. The team's findings were published in ACS Environmental Au.
    “It’s very exciting as a scientist to find something unexpected like this that we weren’t looking for,” said Daniel Katz, CU Boulder chemistry PhD student and lead author of the study. “We’re starting to learn more about this toxic, organic pollutant that we know is out there, and which we need to understand better.”
    MCCPs are currently under consideration for regulation by the Stockholm Convention, a global treaty to protect human health from long-standing and widespread chemicals. While the toxic pollutants have been measured in Antarctica and Asia, researchers haven’t been sure how to document them in the Western Hemisphere’s atmosphere until now.
    From Wastewater to Farmlands
    MCCPs are used in metalworking fluids and in the production of PVC and textiles. They are often found in wastewater and, as a result, can end up in biosolid fertilizer, also called sewage sludge, which is created when liquid is removed from wastewater in a treatment plant. In Oklahoma, researchers suspect the MCCPs they identified came from biosolid fertilizer in the fields near where they set up their instrument.
    “When sewage sludges are spread across the fields, those toxic compounds could be released into the air,” Katz said. “We can’t show directly that that’s happening, but we think it’s a reasonable way that they could be winding up in the air. Sewage sludge fertilizers have been shown to release similar compounds.”
    MCCPs' smaller cousins, Short Chain Chlorinated Paraffins (SCCPs), are currently regulated by the Stockholm Convention and, since 2009, by the EPA in the United States. Regulation came after studies found the toxic pollutants, which travel far and last a long time in the atmosphere, were harmful to human health. But researchers hypothesize that the regulation of SCCPs may have increased MCCPs in the environment.
    “We always have these unintended consequences of regulation, where you regulate something, and then there’s still a need for the products that those were in,” said Ellie Browne, CU Boulder chemistry professor, CIRES Fellow, and co-author of the study. “So they get replaced by something.”
    Measurement of Aerosols Led to a New and Surprising Discovery
    Using a nitrate chemical ionization mass spectrometer, which allows scientists to identify chemical compounds in the air, the team measured air at the agricultural site 24 hours a day for one month. As Katz cataloged the data, he documented the distinct isotopic patterns of the compounds he detected, and he noticed new patterns that did not match any known chemical compounds. With some additional research, he identified them as the chlorinated paraffins that make up MCCPs.
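    The chlorine chemistry behind those isotopic fingerprints is well established and easy to illustrate. Chlorine has two stable isotopes, 35Cl (about 75.8%) and 37Cl (about 24.2%), so a molecule carrying n chlorine atoms shows a characteristic binomial cluster of M, M+2, M+4, ... peaks in a mass spectrum. The short Python sketch below is only an illustration of that general principle, not the study's actual analysis pipeline:

    # Illustration only: why chlorinated compounds stand out in mass spectra.
    # Chlorine's two stable isotopes (35Cl ~75.8%, 37Cl ~24.2%) give a molecule
    # with n chlorines a binomial pattern of M, M+2, M+4, ... peaks.
    from math import comb

    P_CL35, P_CL37 = 0.758, 0.242

    def chlorine_isotope_pattern(n_cl: int) -> list[float]:
        """Relative intensities of the M, M+2, ..., M+2k peaks for n_cl chlorines."""
        return [comb(n_cl, k) * P_CL35 ** (n_cl - k) * P_CL37 ** k
                for k in range(n_cl + 1)]

    # A medium-chain chlorinated paraffin typically carries several chlorine
    # atoms, so its peak cluster diverges sharply from unchlorinated compounds.
    for n in (1, 5, 8):
        peaks = ", ".join(f"{p:.3f}" for p in chlorine_isotope_pattern(n))
        print(f"{n} Cl atoms -> relative peak heights: {peaks}")

    A repeating cluster like this at chlorinated-paraffin masses is the kind of signature that flags a compound as an MCCP rather than a more familiar atmospheric species.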
    Katz says the makeup of MCCPs is similar to that of PFAS, toxic chemicals that break down very slowly over time. Known as “forever chemicals,” their presence in soils recently led the Oklahoma Senate to ban biosolid fertilizer.
    Now that researchers know how to measure MCCPs, the next step might be to measure the pollutants at different times throughout the year to understand how levels change each season. Many unknowns surrounding MCCPs remain, and there’s much more to learn about their environmental impacts.
    “We identified them, but we still don’t know exactly what they do when they are in the atmosphere, and they need to be investigated further,” Katz said. “I think it’s important that we continue to have governmental agencies that are capable of evaluating the science and regulating these chemicals as necessary for public health and safety.”
    Reference: “Real-Time Measurements of Gas-Phase Medium-Chain Chlorinated Paraffins Reveal Daily Changes in Gas-Particle Partitioning Controlled by Ambient Temperature” by Daniel John Katz, Bri Dobson, Mitchell Alton, Harald Stark, Douglas R. Worsnop, Manjula R. Canagaratna and Eleanor C. Browne, 5 June 2025, ACS Environmental Au.
    DOI: 10.1021/acsenvironau.5c00038
    Source: SCITECHDAILY.COM
  • Powering next-gen services with AI in regulated industries 

    Businesses in highly regulated industries like financial services, insurance, pharmaceuticals, and health care are increasingly turning to AI-powered tools to streamline complex and sensitive tasks. Conversational AI-driven interfaces are helping hospitals track the location and delivery of a patient’s time-sensitive cancer drugs. Generative AI chatbots are helping insurance customers answer questions and solve problems. And agentic AI systems are emerging to support financial services customers in making complex financial planning and budgeting decisions.

    “Over the last 15 years of digital transformation, the orientation in many regulated sectors has been to look at digital technologies as a place to provide more cost-effective and meaningful customer experience and divert customers from higher-cost, more complex channels of service,” says Peter Neufeld, who leads the EY Studio+ digital and customer experience capability at EY for financial services companies in the UK, Europe, the Middle East, and Africa. 

    For many, the “last mile” of the end-to-end customer journey can present a challenge. Services at this stage often involve much more complex interactions than the usual app or self-service portal can handle. This could be dealing with a challenging health diagnosis, addressing late mortgage payments, applying for government benefits, or understanding the lifestyle you can afford in retirement. “When we get into these more complex service needs, there’s a real bias toward human interaction,” says Neufeld. “We want to speak to someone, we want to understand whether we’re making a good decision, or we might want alternative views and perspectives.” 

    But these high-cost, high-touch interactions can be less than satisfying for customers when handled through a call center if, for example, technical systems are outdated or data sources are disconnected. Those kinds of problems ultimately lead to complaints and lost business. Good customer experience is critical for the bottom line: customers are 3.8 times more likely to make return purchases after a successful experience than after an unsuccessful one, according to Qualtrics. Intuitive AI-driven systems—supported by robust data infrastructure that can efficiently access and share information in real time—can boost the customer experience, even in complex or sensitive situations.

    Download the full report.

    This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

    This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
    Source: WWW.TECHNOLOGYREVIEW.COM
  • Anthropic launches new Claude service for military and intelligence use

    Anthropic on Thursday announced Claude Gov, its product designed specifically for U.S. defense and intelligence agencies. The AI models have looser guardrails for government use and are trained to better analyze classified information.

    The company said the models it’s announcing “are already deployed by agencies at the highest level of U.S. national security,” and that access to those models will be limited to government agencies handling classified information. The company did not confirm how long they had been in use.

    Claude Gov models are specifically designed to handle government needs, like threat assessment and intelligence analysis, per Anthropic’s blog post. And although the company said they “underwent the same rigorous safety testing as all of our Claude models,” the models have certain specifications for national security work. For example, they “refuse less when engaging with classified information” that’s fed into them, something consumer-facing Claude is trained to flag and avoid. Claude Gov’s models also have greater understanding of documents and context within defense and intelligence, according to Anthropic, and better proficiency in languages and dialects relevant to national security.

    Use of AI by government agencies has long been scrutinized because of its potential harms and ripple effects for minorities and vulnerable communities. There’s been a long list of wrongful arrests across multiple U.S. states due to police use of facial recognition, documented evidence of bias in predictive policing, and discrimination in government algorithms that assess welfare aid. For years, there’s also been an industry-wide controversy over large tech companies like Microsoft, Google, and Amazon allowing the military — particularly in Israel — to use their AI products, with campaigns and public protests under the No Tech for Apartheid movement.

    Anthropic’s usage policy specifically dictates that any user must “Not Create or Facilitate the Exchange of Illegal or Highly Regulated Weapons or Goods,” including using Anthropic’s products or services to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life.” At least eleven months ago, the company said it created a set of contractual exceptions to its usage policy that are “carefully calibrated to enable beneficial uses by carefully selected government agencies.” Certain restrictions — such as disinformation campaigns, the design or use of weapons, the construction of censorship systems, and malicious cyber operations — would remain prohibited. But Anthropic can decide to “tailor use restrictions to the mission and legal authorities of a government entity,” although it will aim to “balance enabling beneficial uses of our products and services with mitigating potential harms.”

    Claude Gov is Anthropic’s answer to ChatGPT Gov, OpenAI’s product for U.S. government agencies, which it launched in January. It’s also part of a broader trend of AI giants and startups alike looking to bolster their businesses with government agencies, especially in an uncertain regulatory landscape.

    When OpenAI announced ChatGPT Gov, the company said that within the past year, more than 90,000 employees of federal, state, and local governments had used its technology to translate documents, generate summaries, draft policy memos, write code, build applications, and more. Anthropic declined to share numbers or use cases of the same sort, but the company is part of Palantir’s FedStart program, a SaaS offering for companies that want to deploy federal government-facing software. Scale AI, the AI giant that provides training data to industry leaders like OpenAI, Google, Microsoft, and Meta, signed a deal with the Department of Defense in March for a first-of-its-kind AI agent program for U.S. military planning. And since then, it’s expanded its business to world governments, recently inking a five-year deal with Qatar to provide automation tools for civil service, healthcare, transportation, and more.
    Source: WWW.THEVERGE.COM
  • CIO Chaos Mastery: Lessons from Vertiv's Bhavik Rao

    Few roles evolve as quickly as that of the modern CIO. A great way to prepare for a future that is largely unknown is to build your adaptability skills through diverse work experiences, says Bhavik Rao, CIO for the Americas at Vertiv. Learn from your wins and your losses and carry on. Stay free of comfort zones and run toward the chaos. Leaders are born of challenges, not comfort.

    Bhavik shares what he’s facing now, how he’s navigating it, and the hard-won lessons that helped shape his approach to IT leadership. Here’s what he had to say:

    What has your career path looked like so far?

    I actually started my career as a techno-functional consultant working with the public sector. That early experience gave me a solid grounding in both the technical and process sides of enterprise systems. From there, I moved into consulting, which really opened up my world. I had the opportunity to work across multiple industries, leading everything from mobile app development and eCommerce deployments to omnichannel initiatives, data platforms, ERP rollouts, and ultimately large-scale digital transformation and IT strategy programs. It was fast paced, challenging, and incredibly rewarding.

    That diversity shaped the way I think today. I learned how to adapt quickly, connect dots across domains, and communicate with everyone from developers to CXOs. Eventually, that path led me to Vertiv, where I now serve as the CIO for the Americas, in addition to leading a couple of global towers, such as data/AI and engineering systems. I’ve been fortunate to lead initiatives that drive operational efficiency, scale GenAI adoption, and turn technology into a true business enabler.

    What are the highlights along your career path?

    There have been several defining moments, both wins and challenges, that have shaped how I lead today. One of the most pivotal chapters has been my time at Vertiv. I joined when the company was still owned by private equity. It was an intense, roll-up-your-sleeves kind of environment. Then, in 2020, we went public -- a huge milestone. But just as we were ramping up our digital transformation, COVID hit, and with it came massive supply chain disruptions. In the middle of all that chaos, I was asked to take over a large-scale transformation program that was struggling.

    It wasn’t easy. There were legacy challenges, resistance to change, and real execution pressure. But we rallied, restructured the program, and launched it. That experience taught me a lot about leading under pressure, aligning teams around outcomes, and staying focused even when everything feels like it’s shifting.

    Another major learning moment was earlier in my career when I lost a large national account I’d spent over seven years building. That was a tough one, but it taught me resilience. I learned not to attach my identity to any one outcome and to keep moving forward with purpose. Then, there are the moments of creation, like launching VeGA, our internal GenAI platform at Vertiv. Seeing it go from idea to impact, with thousands of users and 100+ applications, has been incredibly energizing. It reminded me how powerful it is when innovation meets execution. I’ve also learned the power of being a “player-coach.” I don’t believe in leading from a distance. I get involved, understand the challenges on the ground, and then help teams move forward together.

    What’s your vision for the future of sovereign AI?

    For me, sovereign AI isn’t just a regulatory checkbox; it’s about strategic autonomy. At our company, we are trying to be very intentional about how we scale AI responsibly across our global footprint. So, when I think about sovereign AI, I define it as the ability to control how, where, and why AI is built and deployed, with full alignment to your business needs, risk posture, and data boundaries.

    I’ve seen firsthand how AI becomes a competitive advantage only when you have governance, infrastructure flexibility, and contextual intelligence built in. Our work with VeGA, for example, has shown that employees adopt AI much faster when it’s embedded into secure, business-aligned workflows and not just bolted on from the outside.

    For CIOs, the shift to sovereign AI means:

    - Designing AI infrastructure that can flex, whether it’s hosted internally, cloud-based, or hybrid
    - Building internal AI fluency so your teams aren’t fully reliant on black-box solutions
    - Creating a framework for trust and explainability, especially as AI touches regulated and legal processes

    It’s not about doing everything in-house, but it is about knowing what’s mission-critical to control. In my view, sovereign AI is less about isolation and more about intentional ownership.

    What do you do for fun or to relax?

    Golf is my go-to. It keeps me grounded and humble! It’s one of those games that’s as much about mindset as it is about mechanics. I try to work out regularly when I am not traveling for work. I also enjoy traveling with my family and listening to podcasts.

    What advice would you give to young people considering a leadership path in IT?

    Be curious, stay hands-on, don’t rush the title, and focus on impact. Learn the business, not just the tech. Some of the best technologists I’ve worked with are the ones who understand how a supply chain works or how a sale actually closes. Also, don’t be afraid to take on messy, undefined problems. Run toward the chaos. That’s where leadership is born. And finally, surround yourself with people smarter than you. Build teams that challenge you. That’s where real growth happens.
    Source: WWW.INFORMATIONWEEK.COM
  • Google’s New AI Tool Generates Convincing Deepfakes of Riots, Conflict, and Election Fraud

    Google's recently launched AI video tool can generate realistic clips that contain misleading or inflammatory information about news events, according to a TIME analysis and several tech watchdogs.

    TIME was able to use Veo 3 to create realistic videos, including a Pakistani crowd setting fire to a Hindu temple; Chinese researchers handling a bat in a wet lab; an election worker shredding ballots; and Palestinians gratefully accepting U.S. aid in Gaza. While each of these videos contained some noticeable inaccuracies, several experts told TIME that if shared on social media with a misleading caption in the heat of a breaking news event, these videos could conceivably fuel social unrest or violence.

    While text-to-video generators have existed for several years, Veo 3 marks a significant jump forward, creating AI clips that are nearly indistinguishable from real ones. Unlike the outputs of previous video generators like OpenAI’s Sora, Veo 3 videos can include dialogue, soundtracks, and sound effects. They largely follow the rules of physics, and lack the telltale flaws of past AI-generated imagery. Users have had a field day with the tool, creating short films about plastic babies, pharma ads, and man-on-the-street interviews. But experts worry that tools like Veo 3 will have a much more dangerous effect: turbocharging the spread of misinformation and propaganda, and making it even harder to tell fiction from reality.

    Social media is already flooded with AI-generated content about politicians. In the first week of Veo 3’s release, online users posted fake news segments in multiple languages, including an anchor announcing the death of J.K. Rowling, as well as fake political news conferences. “The risks from deepfakes and synthetic media have been well known and obvious for years, and the fact the tech industry can’t even protect against such well-understood, obvious risks is a clear warning sign that they are not responsible enough to handle even more dangerous, uncontrolled AI and AGI,” says Connor Leahy, the CEO of Conjecture, an AI safety company. “The fact that such blatant irresponsible behavior remains completely unregulated and unpunished will have predictably terrible consequences for innocent people around the globe.”

    Days after Veo 3’s release, a car plowed through a crowd in Liverpool, England, injuring more than 70 people. Police swiftly clarified that the driver was white, to preempt racist speculation of migrant involvement. (Last summer, false reports that a knife attacker was an undocumented Muslim migrant sparked riots in several cities.) Days later, Veo 3 obligingly generated a video of a similar scene, showing police surrounding a car that had just crashed—and a Black driver exiting the vehicle. TIME generated the video with the following prompt: “A video of a stationary car surrounded by police in Liverpool, surrounded by trash. Aftermath of a car crash. There are people running away from the car. A man with brown skin is the driver, who slowly exits the car as police arrive- he is arrested. The video is shot from above - the window of a building. There are screams in the background.”

    After TIME contacted Google about these videos, the company said it would begin adding a visible watermark to videos generated with Veo 3. The watermark now appears on videos generated by the tool. However, it is very small and could easily be cropped out with video-editing software. In a statement, a Google spokesperson said: “Veo 3 has proved hugely popular since its launch. We're committed to developing AI responsibly and we have clear policies to protect users from harm and governing the use of our AI tools.”

    Videos generated by Veo 3 have always contained an invisible watermark known as SynthID, the spokesperson said. Google is currently working on a tool called SynthID Detector that would allow anyone to upload a video to check whether it contains such a watermark, the spokesperson added. However, this tool is not yet publicly available.

    Attempted safeguards

    Veo 3 is available for $249 a month to Google AI Ultra subscribers in countries including the United States and United Kingdom. There were plenty of prompts that Veo 3 did block TIME from creating, especially related to migrants or violence. When TIME asked the model to create footage of a fictional hurricane, it wrote that such a video went against its safety guidelines, and “could be misinterpreted as real and cause unnecessary panic or confusion.” The model generally refused to generate videos of recognizable public figures, including President Trump and Elon Musk. It refused to create a video of Anthony Fauci saying that COVID was a hoax perpetrated by the U.S. government.

    Veo’s website states that it blocks “harmful requests and results.” The model’s documentation says it underwent pre-release red-teaming, in which testers attempted to elicit harmful outputs from the tool. Additional safeguards were then put in place, including filters on its outputs.

    A technical paper released by Google alongside Veo 3 downplays the misinformation risks that the model might pose. Veo 3 is bad at creating text, and is “generally prone to small hallucinations that mark videos as clearly fake,” it says. “Second, Veo 3 has a bias for generating cinematic footage, with frequent camera cuts and dramatic camera angles – making it difficult to generate realistic coercive videos, which would be of a lower production quality.”

    However, minimal prompting did lead to the creation of provocative videos. One showed a man wearing an LGBT rainbow badge pulling envelopes out of a ballot box and feeding them into a paper shredder. (Veo 3 titled the file “Election Fraud Video.”) Other videos generated in response to prompts by TIME included a dirty factory filled with workers scooping infant formula with their bare hands; an e-bike bursting into flames on a New York City street; and Houthi rebels angrily seizing an American flag.

    Some users have been able to take misleading videos even further. Internet researcher Henk van Ess created a fabricated political scandal using Veo 3 by editing together short video clips into a fake newsreel that suggested a small-town school would be replaced by a yacht manufacturer. “If I can create one convincing fake story in 28 minutes, imagine what dedicated bad actors can produce,” he wrote on Substack. “We're talking about the potential for dozens of fabricated scandals per day.”

    “Companies need to be creating mechanisms to distinguish between authentic and synthetic imagery right now,” says Margaret Mitchell, chief AI ethics scientist at Hugging Face. “The benefits of this kind of power—being able to generate realistic life scenes—might include making it possible for people to make their own movies, or to help people via role-playing through stressful situations,” she says. “The potential risks include making it super easy to create intense propaganda that manipulatively enrages masses of people, or confirms their biases so as to further propagate discrimination—and bloodshed.”

    In the past, there were surefire ways of telling that a video was AI-generated—perhaps a person might have six fingers, or their face might transform between the beginning of the video and the end. But as models improve, those signs are becoming increasingly rare. (A video depicting how AIs have rendered Will Smith eating spaghetti shows how far the technology has come in the last three years.) For now, Veo 3 will only generate clips up to eight seconds long, meaning that if a video contains shots that linger for longer, it’s a sign it could be genuine. But this limitation is not likely to last for long.

    Eroding trust online

    Cybersecurity experts warn that advanced AI video tools will allow attackers to impersonate executives, vendors, or employees at scale, convincing victims to relinquish important data. Nina Brown, a Syracuse University professor who specializes in the intersection of media law and technology, says that while there are other large potential harms—including election interference and the spread of nonconsensual sexually explicit imagery—arguably most concerning is the erosion of collective online trust. “There are smaller harms that cumulatively have this effect of, ‘can anybody trust what they see?’” she says. “That’s the biggest danger.”

    Already, accusations that real videos are AI-generated have gone viral online. One post on X, which received 2.4 million views, accused a Daily Wire journalist of sharing an AI-generated video of an aid distribution site in Gaza. A journalist at the BBC later confirmed that the video was authentic. Conversely, an AI-generated video of an “emotional support kangaroo” trying to board an airplane went viral and was widely accepted as real by social media users.

    Veo 3 and other advanced deepfake tools will also likely spur novel legal clashes. Issues around copyright have flared up, with AI labs including Google being sued by artists for allegedly training on their copyrighted content without authorization. (DeepMind told TechCrunch that Google models like Veo "may" be trained on YouTube material.) Celebrities who are subjected to hyper-realistic deepfakes have some legal protections thanks to “right of publicity” statutes, but those vary drastically from state to state. In April, Congress passed the Take It Down Act, which criminalizes non-consensual deepfake porn and requires platforms to take down such material.

    Industry watchdogs argue that additional regulation is necessary to mitigate the spread of deepfake misinformation. “Existing technical safeguards implemented by technology companies such as 'safety classifiers' are proving insufficient to stop harmful images and videos from being generated,” says Julia Smakman, a researcher at the Ada Lovelace Institute. “As of now, the only way to effectively prevent deepfake videos from being used to spread misinformation online is to restrict access to models that can generate them, and to pass laws that require those models to meet safety requirements that meaningfully prevent misuse.”
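    The eight-second ceiling suggests a crude screening heuristic. The sketch below is a hypothetical illustration, not anything TIME or Google ships: it assumes shot boundaries have already been extracted (say, by a scene-cut detector) and simply checks whether any single shot outlasts Veo 3's current limit.

```python
EIGHT_SECOND_LIMIT = 8.0  # current max clip length Veo 3 generates, per the article


def longest_shot_seconds(cut_times: list[float], duration: float) -> float:
    """Length of the longest continuous shot, given cut timestamps in seconds."""
    boundaries = [0.0] + sorted(cut_times) + [duration]
    return max(b - a for a, b in zip(boundaries, boundaries[1:]))


def could_be_veo3(cut_times: list[float], duration: float) -> bool:
    """Heuristic only: True means every shot is short enough for Veo 3 to have
    produced it, which proves nothing, since real footage can also cut quickly.
    False (a shot lingering past the limit) is weak evidence of authenticity."""
    return longest_shot_seconds(cut_times, duration) <= EIGHT_SECOND_LIMIT


# A 30-second video with cuts at 6s, 12s, and 20s has a 10-second final shot,
# which exceeds the limit, so this clip could not have come from Veo 3 as-is.
print(could_be_veo3([6.0, 12.0, 20.0], 30.0))  # False
```

    As the article notes, the duration limit is unlikely to last, so any such check would only ever be one weak signal among many.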
  • Insites: Addressing the Northern housing crisis

    The housing crisis in Canada’s North, which has particularly affected the majority Indigenous population in northern communities, has been of ongoing concern to firms such as Taylor Architecture Group. Formerly known as Pin/Taylor, the firm was established in Yellowknife in 1983. TAG’s Principal, Simon Taylor, says that despite recent political gains for First Nations, “by and large, life is not improving up here.”
    Taylor and his colleagues have designed many different types of housing across the North. But the problems exceed the normal scope of architectural practice. TAG’s Manager of Research and Development, Kristel Derkowski, says, “We can design the units well, but it doesn’t solve many of the underlying problems.” To respond, she says, “we’ve backed up the process to look at the root causes more.” As a result, “the design challenges are informed by much broader systemic research.” 
    We spoke to Derkowski about her research, and the work that Taylor Architecture Group is doing to act on it. Here’s what she has to say.
    Inadequate housing from the start
    The Northwest Territories is about 51% Indigenous. Most non-Indigenous people are concentrated in the capital city of Yellowknife. Outside of Yellowknife, the territory is very much majority Indigenous. 
    The federal government got involved in delivering housing to the far North in 1959. There were problems with this program right from the beginning. One issue was that when the houses were first delivered, they were designed and fabricated down south, and they were completely inadequate for the climate. The houses from that initial program were called “Matchbox houses” because they were so small. These early stages of housing delivery helped establish the precedent that a lower standard of housing was acceptable for northern Indigenous residents compared to Euro-Canadian residents elsewhere. In many cases, that double-standard persists to this day.
    The houses were also inappropriately designed for northern cultures. It’s been said in the research that the way that these houses were delivered to northern settlements was a significant factor in people being divorced from their traditional lifestyles, their traditional hierarchies, the way that they understood home. It was imposing a Euro-Canadian model on Indigenous communities and their ways of life. 
    Part of what the federal government was trying to do was to impose a cash economy and stimulate a market. They were delivering houses and asking for rent. But there weren’t a lot of opportunities to earn cash. This housing was delivered around the sites of former fur trading posts—but the fur trade had collapsed by 1930. There weren’t a lot of jobs. There wasn’t a lot of wage-based employment. And yet, rental payments were being collected in cash, and the rental payments increased significantly over the span of a couple decades. 
    The imposition of a cash economy created problems culturally. It’s been said that public housing delivery, in combination with other social policies, served to introduce the concept of poverty in the far North, where it hadn’t existed before. These policies created a situation where Indigenous northerners couldn’t afford to be adequately housed, because housing demanded cash, and cash wasn’t always available. That’s a big theme that continues to persist today. Most of the territory’s communities remain “non-market”: there is no housing market. There are different kinds of economies in the North—and not all of them revolve wholly around cash. And yet government policies do. The governments’ ideas about housing do, too. So there’s a conflict there. 
    The federal exit from social housing
    After 1969, the federal government devolved housing to the territorial government. The Government of Northwest Territories created the Northwest Territories Housing Corporation. By 1974, the housing corporation took over all the stock of federal housing and started to administer it, in addition to building their own. The housing corporation was rapidly building new housing stock from 1975 up until the mid-1990s. But beginning in the early 1990s, the federal government terminated federal spending on new social housing across the whole country. A couple of years after that, they also decided to allow operational agreements with social housing providers to expire. It didn’t happen that quickly—and maybe not everybody noticed, because it wasn’t a drastic change where all operational funding disappeared immediately. But at that time, the federal government was in 25- to 50-year operational agreements with various housing providers across the country. After 1995, these long-term operating agreements were no longer being renewed—not just in the North, but everywhere in Canada. 
    With the housing corporation up here, that change started in 1996, and we have until 2038 before the federal contribution of operational funding reaches zero. As a result, beginning in 1996, the number of units owned by the NWT Housing Corporation plateaued. There was a little bump in housing stock after that—another 200 units or so in the early 2000s. But basically, the Northwest Territories was stuck for 25 years, from 1996 to 2021, with the same number of public housing units.
    In 1990, there was a report on housing in the NWT that was funded by the Canada Mortgage and Housing Corporation. That report noted that housing was already in a crisis state. At that time, in 1990, researchers said it would take 30 more years to meet existing housing need, if housing production continued at the current rate. The other problem is that houses were so inadequately constructed to begin with, that they generally needed replacement after 15 years. So housing in the Northwest Territories already had serious problems in 1990. Then in 1996, the housing corporation stopped building more. So if you compare the total number of social housing units with the total need for subsidized housing in the territory, you can see a severely widening gap in recent decades. We’ve seen a serious escalation in housing need.
    The Northwest Territories has a very, very small tax base, and it’s extremely expensive to provide services here. Most of our funding for public services comes from the federal government. The NWT on its own does not have a lot of buying power. So ever since the federal government stopped providing operational funding for housing, the territorial government has been hard-pressed to replace that funding with its own internal resources.
    I should probably note that this wasn’t only a problem for the Northwest Territories. Across Canada, we have seen mass homelessness visibly emerge since the ’90s. This is related, at least in part, to the federal government’s decisions to terminate funding for social housing at that time.

    Today’s housing crisis
    Getting to present-day conditions in the NWT, we now have some “market” communities and some “non-market” communities. There are 33 communities total in the NWT, and at least 27 of these don’t have a housing market: there’s no private rental market and there’s no resale market. This relates back to the conflict I mentioned before: the cash economy did not entirely take root. In simple terms, there isn’t enough local employment or income opportunity for a housing market—in conventional terms—to work. 
    Yellowknife is an outlier in the territory. Economic opportunity is concentrated in the capital city. We also have five other “market” communities that are regional centres for the territorial government, where more employment and economic activity take place. Across the non-market communities, on average, the rate of unsuitable or inadequate housing is about five times what it is elsewhere in Canada. Rates of unemployment are about five times what they are in Yellowknife. On top of this, the communities with the highest concentration of Indigenous residents also have the highest rates of unsuitable or inadequate housing, and also have the lowest income opportunity. These statistics clearly show that the inequalities in the territory are highly racialized. 
    Given the situation in non-market communities, there is a severe affordability crisis in terms of the cost to deliver housing. It’s very, very expensive to build housing here. A single detached home costs over a million dollars to build in a place like Fort Good Hope. We’re talking about a very modest three-bedroom house, smaller than what you’d typically build in the South. The million-dollar price tag on each house is a serious issue. Meanwhile, in a non-market community, the potential resale value is extremely low. So there’s a massive gap between the cost of construction and the value of the home once built—and that’s why you have no housing market. It means that private development is impossible. That’s why, until recently, only the federal and territorial governments have been building new homes in non-market communities. It’s so expensive to do, and as soon as the house is built, its value plummets. 

    The costs of living are also very high. According to the NWT Bureau of Statistics, the estimated living costs for an individual in Fort Good Hope are about 1.8 times what it costs to live in Edmonton. Then when it comes to housing specifically, there are further issues with operations and maintenance. The NWT is not tied into the North American hydro grid, and in most communities, electricity is produced by a diesel generator. This is extremely expensive. Everything needs to be shipped in, including fuel. So costs for heating fuel are high as well, as are the heating loads. Then, maintenance and repairs can be very difficult, and of course, very costly. If you need any specialized parts or specialized labour, you are flying those parts and those people in from down South. So to take on the costs of homeownership, on top of the costs of living—in a place where income opportunity is limited to begin with—this is extremely challenging. And from a statistical or systemic perspective, this is simply not in reach for most community members.
    In 2021, the NWT Housing Corporation underwent a strategic renewal and became Housing Northwest Territories. Their mandate went into a kind of flux. They started to pivot from being the primary landlord in the territory towards being a partner to other third-party housing providers, which might be Indigenous governments, community housing providers, nonprofits, municipalities. But those other organisations, in most cases, aren’t equipped or haven’t stepped forward to take on social housing.
    Even though the federal government is releasing capital funding for affordable housing again, northern communities can’t always capitalize on that, because the source of funding for operations remains in question. Housing in non-market communities essentially needs to be subsidized—not just in terms of construction, but also in terms of operations. But that operational funding is no longer available. I can’t stress enough how critical this issue is for the North.
    Fort Good Hope and “one thing that worked”
    I’ll talk a bit about Fort Good Hope. I don’t want to be speaking on behalf of the community here, but I will share a bit about the realities on the ground, as a way of putting things into context. 
    Fort Good Hope, or Rádeyı̨lı̨kóé, is on the Mackenzie River, close to the Arctic Circle. There’s a winter road that’s open at best from January until March—the window is getting narrower because of climate change. There were also barges running each summer for material transportation, but those have been cancelled for the past two years because of droughts linked to climate change. Aside from that, it’s a fly-in community. It’s very remote. It has about 500-600 people. According to census data, less than half of those people live in what’s considered acceptable housing. 
    The biggest problem is housing adequacy. That’s CMHC’s term for housing in need of major repairs. This applies to about 36% of households in Fort Good Hope. In terms of ownership, almost 40% of the community’s housing stock is managed by Housing NWT. That’s a combination of public housing units and market housing units—which are for professionals like teachers and nurses. There’s also a pretty high percentage of owner-occupied units—about 46%. 
    The story told by the community is that when public housing arrived in the 1960s, the people were living in owner-built log homes. Federal agents arrived and they considered some of those homes to be inadequate or unacceptable, and they bulldozed those homes, then replaced some of them—but maybe not all—with public housing units. Then residents had no choice but to rent from the people who took their homes away. This was not a good way to start up a public housing system.
    The state of housing in Fort Good Hope
    Then there was an issue with the rental rates, which drastically increased over time. During a presentation to a government committee in the ’80s, a community member explained that they had initially accepted a place in public housing in 1971 for a nominal monthly rent. By 1984, that same tenant’s rent had risen by roughly 13,000%—and it’s not like they had any other housing options to choose from. So by that point, they’re stuck with paying whatever is asked.
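    For scale—and with purely hypothetical dollar figures, since the exact amounts aren’t given here—a 13,000 percent increase works out to a multiplier of 131:

\[
r_{1984} = \left(1 + \frac{13{,}000}{100}\right) r_{1971} = 131\, r_{1971}
\]

    so a starting rent of, say, $2 a month in 1971 would have become $262 a month by 1984.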
    On top of that, the housing units were poorly built and rapidly deteriorated. One description from that era said the walls were four inches thick, with windows oriented north, and water tanks that froze in the winter and fell through the floor. The single heating source was right next to the only door—residents were concerned about the fire hazard that obviously created. Ultimately the community said: “We don’t actually want any more public housing units. We want to go back to homeownership, which was what we had before.” 
    So Fort Good Hope was a leader in housing at that time and continues to be to this day. The community approached the territorial government and made a proposal: “Give us the block funding for home construction, we’ll administer it ourselves, we’ll help people build houses, and they can keep them.” That actually worked really well. That was the start of the Homeownership Assistance Program (HAP), which ran for about ten years, beginning in 1982. The program expanded across the whole territory after it was piloted in Fort Good Hope. The HAP is still spoken about and written about as the one thing that kind of worked. 
    Self-built log cabins remain from Fort Good Hope’s 1980s Homeownership Program.
    Funding was cost-shared between the federal and territorial governments. Through the program, material packages were purchased for clients who were deemed eligible. The client would then contribute their own sweat equity in the form of hauling logs and putting in time on site. They had two years to finish building the house. Then, as long as they lived in that home for five more years, the loan would be forgiven, and they would continue owning the house with no ongoing loan payments. In some cases, there were no mechanical systems provided as part of this package, but the residents would add to the house over the years. A lot of these units are still standing and still lived in today. Many of them are comparatively well-maintained in contrast with other types of housing—for example, public housing units. It’s also worth noting that the one-time cost of the materials package was—from the government’s perspective—only a fraction of the cost to build and maintain a public housing unit over its lifespan. This program was considered very successful in many places, especially in Fort Good Hope. It created about 40% of the local housing stock at that time, which went from about 100 units to about 140. It’s a small community, so that’s quite significant. 
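    As a minimal sketch of that forgiveness timeline—two years to build, five further years of occupancy, then the loan is forgiven—consider the following; the class and field names are invented for illustration and do not describe an actual Housing NWT system.

```python
from dataclasses import dataclass

BUILD_DEADLINE_YEARS = 2            # client had two years to finish the house
RESIDENCY_YEARS_FOR_FORGIVENESS = 5  # then five more years living in it


@dataclass
class HapFile:
    year_package_issued: int          # year the materials package was purchased
    year_build_finished: int | None   # None if the house was never completed
    years_occupied_after_build: int


def loan_forgiven(f: HapFile) -> bool:
    """True once the client has met both HAP conditions as described above."""
    if f.year_build_finished is None:
        return False
    built_on_time = (f.year_build_finished - f.year_package_issued) <= BUILD_DEADLINE_YEARS
    lived_long_enough = f.years_occupied_after_build >= RESIDENCY_YEARS_FOR_FORGIVENESS
    return built_on_time and lived_long_enough


# Example: package issued 1983, house finished 1985, occupied through 1990.
print(loan_forgiven(HapFile(1983, 1985, 5)))  # True
```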
    What were the successful principles?

    • The community-based decision-making power to allocate the funding.
    • The sweat equity component, which brought homeownership within reach for people—because no cash had to change hands at a time when cash wasn’t available.
    • Local materials—the logs were harvested from the land—and the fact that residents could maintain the homes themselves.

    The Fort Good Hope Construction Centre. Rendering by Taylor Architecture Group
    The Fort Good Hope Construction Centre
    The HAP ended the same year that the federal government terminated new spending on social housing. By the late 1990s, the creation of new public housing stock or new homeownership units had gone down to negligible levels. But more recently, things started to change. The federal government started to release money to build affordable housing. Simultaneously, Indigenous governments are working towards Self-Government and settling their Land Claims. Federal funds have started to flow directly to Indigenous groups. Given these changes, the landscape of Northern housing has started to evolve.
    In 2016, Fort Good Hope created the K’asho Got’ine Housing Society, based on the precedent of the 1980s Fort Good Hope Housing Society. They said: “We did this before, maybe we can do it again.” The community incorporated a non-profit and came up with a five-year plan to meet housing need in their community.
    One thing the community did right away was start up a crew to deliver housing maintenance and repairs. This is being run by Ne’Rahten Developments Ltd., which is the business arm of Yamoga Land Corporation. Over the span of a few years, they built up a crew of skilled workers. Then Ne’Rahten started thinking, “Why can’t we do more? Why can’t we build our own housing?” They identified a need for a space where people could work year-round, and first get training, then employment, in a stable all-season environment.
    This was the initial vision for the Fort Good Hope Construction Centre, and this is where TAG got involved. We had some seed funding through the CMHC Housing Supply Challenge when we partnered with Fort Good Hope.
    We worked with the community for over a year to get the capital funding lined up for the project. This process required us to take on a different role than the one you typically would as an architect. It wasn’t just schematic-design-to-construction-administration. One thing we did pretty early on was a housing design workshop that was open to the whole community, to start understanding what type of housing people would really want to see. Another piece was a lot of outreach and advocacy to build up support for the project and partnerships—for example, with Housing Northwest Territories and Aurora College. We also reached out to our federal MP, the NWT Legislative Assembly and different MLAs, and we talked to a lot of different people about the link between employment and housing. The idea was that the Fort Good Hope Construction Centre would be a demonstration project. Ultimately, funding did come through for the project—from both CMHC and National Indigenous Housing Collaborative Inc.
    The facility itself will not be architecturally spectacular. It’s basically a big shed where you could build a modular house. But the idea is that the construction of those houses is combined with training, and it creates year-round indoor jobs. It intends to combat the short construction seasons, and the fact that people would otherwise be laid off between projects—which makes it very hard to progress with your training or your career. At the same time, the Construction Centre will build up a skilled labour force that otherwise wouldn’t exist—because when there’s no work, skilled people tend to leave the community. And, importantly, the idea is to keep capital funding in the community. So when there’s a new arena that needs to get built, when there’s a new school that needs to get built, you have a crew of people who are ready to take that on. Rather than flying in skilled labourers, you actually have the community doing it themselves. It’s working towards self-determination in housing too, because if those modular housing units are being built in the community, by community members, then eventually they’re taking over design decisions and decisions about maintenance—in a way that hasn’t really happened for decades.
    Transitional homeownership
    My research also looked at a transitional homeownership model that adapts some of the successful principles of the 1980s HAP. Right now, in non-market communities, there are serious gaps in the housing continuum—that is, the different types of housing options available to people. For the most part, you have public housing, and you have homelessness—mostly in the form of hidden homelessness, where people are sleeping on the couches of relatives. Then, in some cases, you have inherited homeownership—where people got homes through the HAP or some other government program.
    But for the most part, not a lot of people in non-market communities are actually moving into homeownership anymore. I asked the local housing manager in Fort Good Hope: “When’s the last time someone built a house in the community?” She said, “I can only think of one person. It was probably about 20 years ago, and that person actually went to the bank and got a mortgage. If people have a home, it’s usually inherited from their parents or from relatives.” And that situation is a bit of a problem in itself, because it means that people can’t move out of public housing. Public housing traps you in a lot of ways. For example, it punishes employment, because rent is geared to income. It’s been said many times that this model disincentivizes employment. I was in a workshop last year where an Indigenous person spoke up and said, “Actually, it’s not disincentivizing, it punishes employment. It takes things away from you.”
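    To see why rent geared to income reads as a penalty on work, here is a toy calculation. The 30 percent share is the common Canadian rent-geared-to-income benchmark, assumed here purely for illustration—the actual Housing NWT formula may differ.

```python
RGI_SHARE = 0.30  # assumed rent-geared-to-income share; not confirmed for Housing NWT


def net_gain_from_raise(old_income: float, new_income: float) -> float:
    """Extra take-home after the RGI rent increase absorbs part of the raise."""
    raise_amount = new_income - old_income
    rent_increase = RGI_SHARE * raise_amount  # rent rises with every extra dollar
    return raise_amount - rent_increase


# A $10,000 raise nets only $7,000 once rent rises in step with income.
print(net_gain_from_raise(30_000, 40_000))  # 7000.0
```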
    Somebody at the territorial housing corporation in Yellowknife told me, “We have clients who are over the income threshold for public housing, but there’s nowhere else they can go.” Theoretically, they would go to the private housing market, they would go to market housing, or they would go to homeownership, but those options don’t exist or they aren’t within reach. 
    So the idea with the transitional homeownership model is to create an option that could allow the highest income earners in a non-market community to move towards homeownership. This could take some pressure off the public housing system. And it would almost be like a wealth distribution measure: people who are able to afford the cost of operating and maintaining a home then have that option, instead of remaining in government-subsidized housing. For those who cannot, the public housing system is still an option—and maybe a few more public housing units are freed up. 
    I’ve developed about 36 recommendations for a transitional homeownership model in northern non-market communities. The recommendations are meant to be actioned at various scales: at the scale of the individual household, the scale of the housing provider, and the scale of the whole community. The idea is that if you look at housing as part of a whole system, then there are certain moves that might make sense here—in a non-market context especially—that wouldn’t make sense elsewhere. So for example, we’re in a situation where a house doesn’t appreciate in value. It’s not a financial asset, it’s actually a financial liability, and it’s something that costs a lot to maintain over the years. Giving someone a house in a non-market community is actually giving them a burden, but some residents would be quite willing to take this on, just to have an option of getting out of public housing. It just takes a shift in mindset to start considering solutions for that kind of context.
    One particularly interesting feature of non-market communities is that they’re still functioning with a mixed economy: partially a subsistence-based or traditional economy, and partially a cash economy. I think that’s actually a strength that hasn’t been tapped into by territorial and federal policies. In the far North, in-kind and traditional economies are still very much a way of life. People subsidize their groceries with “country food,” which means food that was harvested from the land. And instead of paying for fuel tank refills in cash, many households in non-market communities are burning wood as their primary heat source. In communities south of the treeline, like Fort Good Hope, that wood is also harvested from the land. Despite there being no exchange of cash involved, these are critical economic activities—and they are also part of a sustainable, resilient economy grounded in local resources and traditional skills.
    This concept of the mixed economy could be tapped into as part of a housing model, by bringing back the idea of a ‘sweat equity’ contribution instead of a down payment—just like in the HAP. Contributing time and labour is still an economic exchange, but it bypasses the ‘cash’ part—the part that’s still hard to come by in a non-market community. Labour doesn’t have to be manual labour, either. There are all kinds of work that need to take place in a community: maybe taking training courses and working on projects at the Construction Centre, maybe helping out at the Band Office, or providing childcare services for other working parents—and so on. So it could be more inclusive than a model that focuses on manual labour.
    Another thing to highlight is a rent-to-own trial period. Not every client will be equipped to take on the burdens of homeownership. So you can give people a trial period. If it doesn’t work out and they can’t pay for operations and maintenance, they could continue renting without losing their home.
    Then it’s worth touching on some basic design principles for the homeownership units. In the North, the solutions that work are often the simplest—not the most technologically innovative. When you’re in a remote location, specialized replacement parts and specialized labour are both difficult to come by. And new technologies aren’t always designed for extreme climates—especially as we trend towards the digital. So rather than installing technologically complex, high-efficiency systems, it actually makes more sense to build something that people are comfortable with, familiar with, and willing to maintain. In a southern context, people suggest solutions like solar panels to manage energy loads. But in the North, the best thing you can do for energy is put a woodstove in the house. That’s something we’ve heard loud and clear in many communities. Even if people can’t afford to fill their fuel tank, they’re still able to keep chopping wood—or their neighbour is, or their brother, or their kid, and so on. It’s just a different way of looking at things and a way of bringing things back down to earth, back within reach of community members. 
    Regulatory barriers to housing access: Revisiting the National Building Code
    On that note, there’s one more project I’ll touch on briefly. TAG is working on a research study, funded by Housing, Infrastructure and Communities Canada, which looks at regulatory barriers to housing access in the North. The National Building Code (NBC) has evolved largely to serve the southern market context, where constraints and resources are both very different than they are up here. Technical solutions in the NBC are based on assumptions that, in some cases, simply don’t apply in northern communities.
    Here’s a very simple example: minimum distance to a fire hydrant. Most of our communities don’t have fire hydrants at all. We don’t have municipal services. The closest hydrant might be thousands of kilometres away. So what do we do instead? We just have different constraints to consider.
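    To make the gap between a prescriptive rule and its intent concrete, here is a toy comparison; every threshold and field below is invented for illustration and is not actual NBC text.

```python
from dataclasses import dataclass


@dataclass
class Site:
    distance_to_hydrant_m: float | None  # None: no hydrant exists at all
    has_water_truck_service: bool        # a common northern alternative
    onsite_tank_litres: float


def meets_prescriptive_rule(site: Site, max_distance_m: float = 90.0) -> bool:
    """The southern-style rule: a hydrant within some fixed distance."""
    return site.distance_to_hydrant_m is not None and site.distance_to_hydrant_m <= max_distance_m


def meets_intent(site: Site, min_tank_litres: float = 10_000.0) -> bool:
    """Intent of the rule—water available for firefighting—met by other means."""
    return (meets_prescriptive_rule(site)
            or site.has_water_truck_service
            or site.onsite_tank_litres >= min_tank_litres)


# A typical non-market community site: no hydrant, but trucked water service.
site = Site(distance_to_hydrant_m=None, has_water_truck_service=True, onsite_tank_litres=0.0)
print(meets_prescriptive_rule(site), meets_intent(site))  # False True
```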
    That’s just one example but there are many more. We are looking closely at the NBC, and we are also working with a couple of different communities in different situations. The idea is to identify where there are conflicts between what’s regulated and what’s actually feasible, viable, and practical when it comes to on-the-ground realities. Then we’ll look at some alternative solutions for housing. The idea is to meet the intent of the NBC, but arrive at some technical solutions that are more practical to build, easier to maintain, and more appropriate for northern communities. 
    All of the projects I’ve just described are fairly recent, and very much still ongoing. We’ll see how it all plays out. I’m sure we’re going to run into a lot of new barriers and learn a lot more on the way, but it’s an incremental trial-and-error process. Even with the Construction Centre, we’re saying that this is a demonstration project, but how—or if—it rolls out in other communities would be totally community-dependent, and it could look very, very different from place to place. 
    In doing any research on Northern housing, one of the consistent findings is that there is no one-size-fits-all solution. Northern communities are not all the same. There are all kinds of different governance structures, different climates, ground conditions, transportation routes, different population sizes, different people, different cultures. Communities are Dene, Métis, Inuvialuit, as well as non-Indigenous, all with different ways of being. One-size-fits-all solutions don’t work—they never have. And the housing crisis is complex, and it’s difficult to unravel. So we’re trying to move forward with a few different approaches, maybe in a few different places, and we’re hoping that some communities, some organizations, or even some individual people, will see some positive impacts.

     As appeared in the June 2025 issue of Canadian Architect magazine 

    The post Insites: Addressing the Northern housing crisis appeared first on Canadian Architect.
    #insites #addressing #northern #housing #crisis
    Insites: Addressing the Northern housing crisis
    The housing crisis in Canada’s North, which has particularly affected the majority Indigenous population in northern communities, has been of ongoing concern to firms such as Taylor Architecture Group (TAG). Formerly known as Pin/Taylor, the firm was established in Yellowknife in 1983. TAG’s Principal, Simon Taylor, says that despite recent political gains for First Nations, “by and large, life is not improving up here.” Taylor and his colleagues have designed many different types of housing across the North. But the problems exceed the normal scope of architectural practice. TAG’s Manager of Research and Development, Kristel Derkowski, says, “We can design the units well, but it doesn’t solve many of the underlying problems.” To respond, she says, “we’ve backed up the process to look at the root causes more.” As a result, “the design challenges are informed by much broader systemic research.”

We spoke to Derkowski about her research, and the work that Taylor Architecture Group is doing to act on it. Here’s what she has to say.

Inadequate housing from the start

The Northwest Territories is about 51% Indigenous. Most non-Indigenous people are concentrated in the capital city of Yellowknife. Outside of Yellowknife, the territory is very much majority Indigenous.

The federal government got involved in delivering housing to the far North in 1959. There were problems with this program right from the beginning. One issue was that when the houses were first delivered, they were designed and fabricated down south, and they were completely inadequate for the climate. The houses from that initial program were called “Matchbox houses” because they were so small. These early stages of housing delivery helped establish the precedent that a lower standard of housing was acceptable for northern Indigenous residents compared to Euro-Canadian residents elsewhere. In many cases, that double standard persists to this day. The houses were also inappropriately designed for northern cultures. It’s been said in the research that the way that these houses were delivered to northern settlements was a significant factor in people being divorced from their traditional lifestyles, their traditional hierarchies, the way that they understood home. It was imposing a Euro-Canadian model on Indigenous communities and their ways of life.

Part of what the federal government was trying to do was to impose a cash economy and stimulate a market. They were delivering houses and asking for rent. But there weren’t a lot of opportunities to earn cash. This housing was delivered around the sites of former fur trading posts—but the fur trade had collapsed by 1930. There weren’t a lot of jobs. There wasn’t a lot of wage-based employment. And yet, rental payments were being collected in cash, and the rental payments increased significantly over the span of a couple decades.

The imposition of a cash economy created problems culturally. It’s been said that public housing delivery, in combination with other social policies, served to introduce the concept of poverty in the far North, where it hadn’t existed before. These policies created a situation where Indigenous northerners couldn’t afford to be adequately housed, because housing demanded cash, and cash wasn’t always available. That’s a big theme that continues to persist today. Most of the territory’s communities remain “non-market”: there is no housing market. There are different kinds of economies in the North—and not all of them revolve wholly around cash.
And yet government policies do. The governments’ ideas about housing do, too. So there’s a conflict there.

The federal exit from social housing

After 1969, the federal government devolved housing to the territorial government. The Government of Northwest Territories created the Northwest Territories Housing Corporation. By 1974, the housing corporation took over all the stock of federal housing and started to administer it, in addition to building their own. The housing corporation was rapidly building new housing stock from 1975 up until the mid-1990s. But beginning in the early 1990s, the federal government terminated federal spending on new social housing across the whole country. A couple of years after that, they also decided to allow operational agreements with social housing providers to expire. It didn’t happen that quickly—and maybe not everybody noticed, because it wasn’t a drastic change where all operational funding disappeared immediately. But at that time, the federal government was in 25- to 50-year operational agreements with various housing providers across the country. After 1995, these long-term operating agreements were no longer being renewed—not just in the North, but everywhere in Canada.

With the housing corporation up here, that change started in 1996, and we have until 2038 before the federal contribution of operational funding reaches zero. As a result, beginning in 1996, the number of units owned by the NWT Housing Corporation plateaued. There was a little bump in housing stock after that—another 200 units or so in the early 2000s. But basically, the Northwest Territories was stuck for 25 years, from 1996 to 2021, with the same number of public housing units.

In 1990, there was a report on housing in the NWT that was funded by the Canada Mortgage and Housing Corporation (CMHC). That report noted that housing was already in a crisis state. At that time, in 1990, researchers said it would take 30 more years to meet existing housing need, if housing production continued at the current rate. The other problem is that houses were so inadequately constructed to begin with, that they generally needed replacement after 15 years. So housing in the Northwest Territories already had serious problems in 1990. Then in 1996, the housing corporation stopped building more. So if you compare the total number of social housing units with the total need for subsidized housing in the territory, you can see a severely widening gap in recent decades. We’ve seen a serious escalation in housing need.

The Northwest Territories has a very, very small tax base, and it’s extremely expensive to provide services here. Most of our funding for public services comes from the federal government. The NWT on its own does not have a lot of buying power. So ever since the federal government stopped providing operational funding for housing, the territorial government has been hard-pressed to replace that funding with its own internal resources. I should probably note that this wasn’t only a problem for the Northwest Territories. Across Canada, we have seen mass homelessness visibly emerge since the ’90s. This is related, at least in part, to the federal government’s decisions to terminate funding for social housing at that time.

Today’s housing crisis

Getting to present-day conditions in the NWT, we now have some “market” communities and some “non-market” communities.
There are 33 communities total in the NWT, and at least 27 of these don’t have a housing market: there’s no private rental market and there’s no resale market. This relates back to the conflict I mentioned before: the cash economy did not entirely take root. In simple terms, there isn’t enough local employment or income opportunity for a housing market—in conventional terms—to work.

Yellowknife is an outlier in the territory. Economic opportunity is concentrated in the capital city. We also have five other “market” communities that are regional centres for the territorial government, where more employment and economic activity take place. Across the non-market communities, on average, the rate of unsuitable or inadequate housing is about five times what it is elsewhere in Canada. Rates of unemployment are about five times what they are in Yellowknife. On top of this, the communities with the highest concentration of Indigenous residents also have the highest rates of unsuitable or inadequate housing, and also have the lowest income opportunity. These statistics clearly show that the inequalities in the territory are highly racialized.

Given the situation in non-market communities, there is a severe affordability crisis in terms of the cost to deliver housing. It’s very, very expensive to build housing here. A single detached home costs over a million dollars to build in a place like Fort Good Hope (Rádeyı̨lı̨kóé). We’re talking about a very modest three-bedroom house, smaller than what you’d typically build in the South. The million-dollar price tag on each house is a serious issue. Meanwhile, in a non-market community, the potential resale value is extremely low. So there’s a massive gap between the cost of construction and the value of the home once built—and that’s why you have no housing market. It means that private development is impossible. That’s why, until recently, only the federal and territorial governments have been building new homes in non-market communities. It’s so expensive to do, and as soon as the house is built, its value plummets.

The costs of living are also very high. According to the NWT Bureau of Statistics, the estimated living costs for an individual in Fort Good Hope are about 1.8 times what it costs to live in Edmonton. Then when it comes to housing specifically, there are further issues with operations and maintenance. The NWT is not tied into the North American hydro grid, and in most communities, electricity is produced by a diesel generator. This is extremely expensive. Everything needs to be shipped in, including fuel. So costs for heating fuel are high as well, as are the heating loads. Then, maintenance and repairs can be very difficult, and of course, very costly. If you need any specialized parts or specialized labour, you are flying those parts and those people in from down South. So to take on the costs of homeownership, on top of the costs of living—in a place where income opportunity is limited to begin with—this is extremely challenging. And from a statistical or systemic perspective, this is simply not in reach for most community members.

In 2021, the NWT Housing Corporation underwent a strategic renewal and became Housing Northwest Territories. Their mandate went into a kind of flux. They started to pivot from being the primary landlord in the territory towards being a partner to other third-party housing providers, which might be Indigenous governments, community housing providers, nonprofits, municipalities.
But those other organisations, in most cases, aren’t equipped or haven’t stepped forward to take on social housing. Even though the federal government is releasing capital funding for affordable housing again, northern communities can’t always capitalize on that, because the source of funding for operations remains in question. Housing in non-market communities essentially needs to be subsidized—not just in terms of construction, but also in terms of operations. But that operational funding is no longer available. I can’t stress enough how critical this issue is for the North.

Fort Good Hope and “one thing that (kind of) worked”

I’ll talk a bit about Fort Good Hope. I don’t want to be speaking on behalf of the community here, but I will share a bit about the realities on the ground, as a way of putting things into context.

Fort Good Hope, or Rádeyı̨lı̨kóé, is on the Mackenzie River, close to the Arctic Circle. There’s a winter road that’s open at best from January until March—the window is getting narrower because of climate change. There were also barges running each summer for material transportation, but those have been cancelled for the past two years because of droughts linked to climate change. Aside from that, it’s a fly-in community. It’s very remote. It has about 500-600 people. According to census data, less than half of those people live in what’s considered acceptable housing.

The biggest problem is housing adequacy. That’s CMHC’s term for housing in need of major repairs. This applies to about 36% of households in Fort Good Hope. In terms of ownership, almost 40% of the community’s housing stock is managed by Housing NWT. That’s a combination of public housing units and market housing units—which are for professionals like teachers and nurses. There’s also a pretty high percentage of owner-occupied units—about 46%.

The story told by the community is that when public housing arrived in the 1960s, the people were living in owner-built log homes. Federal agents arrived and they considered some of those homes to be inadequate or unacceptable, and they bulldozed those homes, then replaced some of them—but maybe not all—with public housing units. Then residents had no choice but to rent from the people who took their homes away. This was not a good way to start up a public housing system.

The state of housing in Fort Good Hope

Then there was an issue with the rental rates, which drastically increased over time. During a presentation to a government committee in the ’80s, a community member explained that they had initially accepted a place in public housing for a rental fee of $2 a month in 1971. By 1984, the same community member was expected to pay $267 a month. That might not sound like much in today’s terms, but it was roughly a 13,000% increase for that same tenant—and it’s not like they had any other housing options to choose from. So by that point, they’re stuck with paying whatever is asked.
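As a quick back-of-envelope check of that figure (our arithmetic, using the $2 and $267 monthly rents quoted above; the original presentation did not show the calculation):

\[
\frac{267 - 2}{2} \times 100\% = 13{,}250\% \approx 13{,}000\%
\]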
On top of that, the housing units were poorly built and rapidly deteriorated. One description from that era said the walls were four inches thick, with windows oriented north, and water tanks that froze in the winter and fell through the floor. The single heating source was right next to the only door—residents were concerned about the fire hazard that obviously created. Ultimately the community said: “We don’t actually want any more public housing units. We want to go back to homeownership, which was what we had before.”

So Fort Good Hope was a leader in housing at that time and continues to be to this day. The community approached the territorial government and made a proposal: “Give us the block funding for home construction, we’ll administer it ourselves, we’ll help people build houses, and they can keep them.” That actually worked really well. That was the start of the Homeownership Assistance Program (HAP) that ran for about ten years, beginning in 1982. The program expanded across the whole territory after it was piloted in Fort Good Hope. The HAP is still spoken about and written about as the one thing that kind of worked.

Self-built log cabins remain from Fort Good Hope’s 1980s Homeownership Program (HAP).

Funding was cost-shared between the federal and territorial governments. Through the program, material packages were purchased for clients who were deemed eligible. The client would then contribute their own sweat equity in the form of hauling logs and putting in time on site. They had two years to finish building the house. Then, as long as they lived in that home for five more years, the loan would be forgiven, and they would continue owning the house with no ongoing loan payments. In some cases, there were no mechanical systems provided as part of this package, but the residents would add to the house over the years. A lot of these units are still standing and still lived in today. Many of them are comparatively well-maintained in contrast with other types of housing—for example, public housing units. It’s also worth noting that the one-time cost of the materials package was—from the government’s perspective—only a fraction of the cost to build and maintain a public housing unit over its lifespan. At the time, it cost about $50,000 to $80,000 to build a HAP home, whereas the lifetime cost of a public housing unit is in the order of $2,000,000.
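Taking those figures at face value, the “fraction” can be made concrete (again, our arithmetic rather than the government’s accounting):

\[
\frac{\$50{,}000}{\$2{,}000{,}000} = 2.5\%, \qquad \frac{\$80{,}000}{\$2{,}000{,}000} = 4\%
\]

In other words, a HAP materials package cost roughly one fortieth to one twenty-fifth of the lifetime cost of a single public housing unit.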
This program was considered very successful in many places, especially in Fort Good Hope. It created about 40% of their local housing stock at that time, which went from about 100 units to about 140. It’s a small community, so that’s quite significant.

What were the successful principles? The community-based decision-making power to allocate the funding. The sweat equity component, which brought homeownership within the range of being attainable for people—because there wasn’t cash needing to be transferred, when the cash wasn’t available. Local materials—they harvested the logs from the land. And the fact that residents could maintain the homes themselves.

The Fort Good Hope Construction Centre. Rendering by Taylor Architecture Group

The Fort Good Hope Construction Centre

The HAP ended the same year that the federal government terminated new spending on social housing. By the late 1990s, the creation of new public housing stock or new homeownership units had gone down to negligible levels. But more recently, things started to change. The federal government started to release money to build affordable housing. Simultaneously, Indigenous governments are working towards Self-Government and settling their Land Claims. Federal funds have started to flow directly to Indigenous groups. Given these changes, the landscape of Northern housing has started to evolve. In 2016, Fort Good Hope created the K’asho Got’ine Housing Society, based on the precedent of the 1980s Fort Good Hope Housing Society. They said: “We did this before, maybe we can do it again.” The community incorporated a non-profit and came up with a five-year plan to meet housing need in their community. One thing the community did right away was start up a crew to deliver housing maintenance and repairs. This is being run by Ne’Rahten Developments Ltd., which is the business arm of Yamoga Land Corporation (the local Indigenous Government). Over the span of a few years, they built up a crew of skilled workers. Then Ne’Rahten started thinking, “Why can’t we do more? Why can’t we build our own housing?” They identified a need for a space where people could work year-round, and first get training, then employment, in a stable all-season environment. This was the initial vision for the Fort Good Hope Construction Centre, and this is where TAG got involved.

We had some seed funding through the CMHC Housing Supply Challenge when we partnered with Fort Good Hope. We worked with the community for over a year to get the capital funding lined up for the project. This process required us to take on a different role than the one you typically would as an architect. It wasn’t just schematic-design-to-construction-administration. One thing we did pretty early on was a housing design workshop that was open to the whole community, to start understanding what type of housing people would really want to see. Another piece was a lot of outreach and advocacy to build up support for the project and partnerships—for example, with Housing Northwest Territories and Aurora College. We also reached out to our federal MP, the NWT Legislative Assembly and different MLAs, and we talked to a lot of different people about the link between employment and housing. The idea was that the Fort Good Hope Construction Centre would be a demonstration project. Ultimately, funding did come through for the project—from both CMHC and National Indigenous Housing Collaborative Inc.

The facility itself will not be architecturally spectacular. It’s basically a big shed where you could build a modular house. But the idea is that the construction of those houses is combined with training, and it creates year-round indoor jobs. It intends to combat the short construction seasons, and the fact that people would otherwise be laid off between projects—which makes it very hard to progress with your training or your career. At the same time, the Construction Centre will build up a skilled labour force that otherwise wouldn’t exist—because when there’s no work, skilled people tend to leave the community. And, importantly, the idea is to keep capital funding in the community. So when there’s a new arena that needs to get built, when there’s a new school that needs to get built, you have a crew of people who are ready to take that on. Rather than flying in skilled labourers, you actually have the community doing it themselves. It’s working towards self-determination in housing too, because if those modular housing units are being built in the community, by community members, then eventually they’re taking over design decisions and decisions about maintenance—in a way that hasn’t really happened for decades.

Transitional homeownership

My research also looked at a transitional homeownership model that adapts some of the successful principles of the 1980s HAP. Right now, in non-market communities, there are serious gaps in the housing continuum—that is, the different types of housing options available to people.
For the most part, you have public housing, and you have homelessness—mostly in the form of hidden homelessness, where people are sleeping on the couches of relatives. Then, in some cases, you have inherited homeownership—where people got homes through the HAP or some other government program. But for the most part, not a lot of people in non-market communities are actually moving into homeownership anymore. I asked the local housing manager in Fort Good Hope: “When’s the last time someone built a house in the community?” She said, “I can only think of one person. It was probably about 20 years ago, and that person actually went to the bank and got a mortgage. If people have a home, it’s usually inherited from their parents or from relatives.”

And that situation is a bit of a problem in itself, because it means that people can’t move out of public housing. Public housing traps you in a lot of ways. For example, it punishes employment, because rent is geared to income. It’s been said many times that this model disincentivizes employment. I was in a workshop last year where an Indigenous person spoke up and said, “Actually, it’s not disincentivizing, it punishes employment. It takes things away from you.” Somebody at the territorial housing corporation in Yellowknife told me, “We have clients who are over the income threshold for public housing, but there’s nowhere else they can go.” Theoretically, they would go to the private housing market, they would go to market housing, or they would go to homeownership, but those options don’t exist or they aren’t within reach.

So the idea with the transitional homeownership model is to create an option that could allow the highest income earners in a non-market community to move towards homeownership. This could take some pressure off the public housing system. And it would almost be like a wealth distribution measure: people who are able to afford the cost of operating and maintaining a home then have that option, instead of remaining in government-subsidized housing. For those who cannot, the public housing system is still an option—and maybe a few more public housing units are freed up.

I’ve developed about 36 recommendations for a transitional homeownership model in northern non-market communities. The recommendations are meant to be actioned at various scales: at the scale of the individual household, the scale of the housing provider, and the scale of the whole community. The idea is that if you look at housing as part of a whole system, then there are certain moves that might make sense here—in a non-market context especially—that wouldn’t make sense elsewhere. So for example, we’re in a situation where a house doesn’t appreciate in value. It’s not a financial asset, it’s actually a financial liability, and it’s something that costs a lot to maintain over the years. Giving someone a house in a non-market community is actually giving them a burden, but some residents would be quite willing to take this on, just to have an option of getting out of public housing. It just takes a shift in mindset to start considering solutions for that kind of context.

One particularly interesting feature of non-market communities is that they’re still functioning with a mixed economy: partially a subsistence-based or traditional economy, and partially a cash economy. I think that’s actually a strength that hasn’t been tapped into by territorial and federal policies. In the far North, in-kind and traditional economies are still very much a way of life.
People subsidize their groceries with “country food,” which means food that was harvested from the land. And instead of paying for fuel tank refills in cash, many households in non-market communities are burning wood as their primary heat source. In communities south of the treeline, like Fort Good Hope, that wood is also harvested from the land. Despite there being no exchange of cash involved, these are critical economic activities—and they are also part of a sustainable, resilient economy grounded in local resources and traditional skills.

This concept of the mixed economy could be tapped into as part of a housing model, by bringing back the idea of a ‘sweat equity’ contribution instead of a down payment—just like in the HAP. Contributing time and labour is still an economic exchange, but it bypasses the ‘cash’ part—the part that’s still hard to come by in a non-market community. Labour doesn’t have to be manual labour, either. There are all kinds of work that need to take place in a community: maybe taking training courses and working on projects at the Construction Centre, maybe helping out at the Band Office, or providing childcare services for other working parents—and so on. So it could be more inclusive than a model that focuses on manual labour.

Another thing to highlight is a rent-to-own trial period. Not every client will be equipped to take on the burdens of homeownership. So you can give people a trial period. If it doesn’t work out and they can’t pay for operations and maintenance, they could continue renting without losing their home.

Then it’s worth touching on some basic design principles for the homeownership units. In the North, the solutions that work are often the simplest—not the most technologically innovative. When you’re in a remote location, specialized replacement parts and specialized labour are both difficult to come by. And new technologies aren’t always designed for extreme climates—especially as we trend towards the digital. So rather than installing technologically complex, high-efficiency systems, it actually makes more sense to build something that people are comfortable with, familiar with, and willing to maintain. In a southern context, people suggest solutions like solar panels to manage energy loads. But in the North, the best thing you can do for energy is put a woodstove in the house. That’s something we’ve heard loud and clear in many communities. Even if people can’t afford to fill their fuel tank, they’re still able to keep chopping wood—or their neighbour is, or their brother, or their kid, and so on. It’s just a different way of looking at things and a way of bringing things back down to earth, back within reach of community members.

Regulatory barriers to housing access: Revisiting the National Building Code

On that note, there’s one more project I’ll touch on briefly. TAG is working on a research study, funded by Housing, Infrastructure and Communities Canada, which looks at regulatory barriers to housing access in the North. The National Building Code (NBC) has evolved largely to serve the southern market context, where constraints and resources are both very different than they are up here. Technical solutions in the NBC are based on assumptions that, in some cases, simply don’t apply in northern communities. Here’s a very simple example: minimum distance to a fire hydrant. Most of our communities don’t have fire hydrants at all. We don’t have municipal services. The closest hydrant might be thousands of kilometres away.
So what do we do instead? We just have different constraints to consider. That’s just one example, but there are many more. We are looking closely at the NBC, and we are also working with a couple of different communities in different situations. The idea is to identify where there are conflicts between what’s regulated and what’s actually feasible, viable, and practical when it comes to on-the-ground realities. Then we’ll look at some alternative solutions for housing. The idea is to meet the intent of the NBC, but arrive at some technical solutions that are more practical to build, easier to maintain, and more appropriate for northern communities.

All of the projects I’ve just described are fairly recent, and very much still ongoing. We’ll see how it all plays out. I’m sure we’re going to run into a lot of new barriers and learn a lot more on the way, but it’s an incremental trial-and-error process. Even with the Construction Centre, we’re saying that this is a demonstration project, but how—or if—it rolls out in other communities would be totally community-dependent, and it could look very, very different from place to place.

In doing any research on Northern housing, one of the consistent findings is that there is no one-size-fits-all solution. Northern communities are not all the same. There are all kinds of different governance structures, different climates, ground conditions, transportation routes, different population sizes, different people, different cultures. Communities are Dene, Métis, Inuvialuit, as well as non-Indigenous, all with different ways of being. One-size-fits-all solutions don’t work—they never have. And the housing crisis is complex, and it’s difficult to unravel. So we’re trying to move forward with a few different approaches, maybe in a few different places, and we’re hoping that some communities, some organizations, or even some individual people, will see some positive impacts.

As appeared in the June 2025 issue of Canadian Architect magazine.
  • Meta splits its AI division into two

    Meta is restructuring its AI division into two distinct units, AI Products and AGI Foundations, marking its most significant internal overhaul as it races to compete with OpenAI and Google.

    The move, detailed in an internal memo from Chief Product Officer Chris Cox and reported by Axios, appoints Connor Hayes to lead AI product integration while Ahmad Al-Dahle and Amir Frenkel will co-direct long-term AGI research.

    The restructuring comes amid mounting crises, including the delayed Llama 4 Behemoth model and the departure of key Llama architects to competitors like Mistral AI. This marks Meta’s second major AI reorganization since CEO Mark Zuckerberg’s 2023 attempt to “turbocharge” generative AI efforts, which saw the company fall further behind rivals despite early promise.

    “Structural changes alone won’t solve Meta’s AI challenges,” said Amandeep Singh, practice director at QKS Group. “While the new AGI Foundations unit creates clarity, retaining elite talent requires a seamless pipeline from research to real-world deployment. Meta has struggled with fragmented pipelines and unclear priorities,” added Singh.

    Talent exodus and technical setbacks

    The restructuring follows talent losses that have exposed fundamental weaknesses in Meta’s AI strategy. Only three authors remain from the original 14-person Llama research team, according to Business Insider. Internal surveys cited by The Information reveal plummeting morale in Meta’s AI division, where employees cite resource constraints and sluggish progress.

    “Talent follows momentum, and right now, momentum lives where research decisions directly shape deployed capabilities,” noted Singh.

    This talent drain coincides with technical setbacks, most notably the underperforming Llama 4 model, which has struggled with reasoning and mathematical tasks. These combined challenges have left Meta playing catch-up in the race toward artificial general intelligence, despite its early open-source advantages.

    The company’s strategy, bolstered by initiatives such as Llama for Startups and the recent Llama API launch, aims to attract developers and to differentiate its open models from competitors’ proprietary offerings. But analysts caution these initiatives alone may not be enough to win enterprise trust.
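    To make the open-weights pitch concrete, here is a minimal sketch of what "free to use and modify" access looks like in practice, using the Hugging Face transformers library. The checkpoint name and generation settings are illustrative assumptions rather than details from this article, and gated Llama weights require accepting Meta's license on huggingface.co first.

```python
# Minimal sketch: running an open-weights Llama model locally with Hugging Face
# transformers. The model ID below is an illustrative assumption; device_map="auto"
# additionally requires the `accelerate` package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # hypothetical checkpoint choice

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Because the weights are open, they can be fine-tuned, quantized, or deployed
# on-premises: the cost and control advantages the article attributes to Llama.
prompt = "Summarize the trade-offs of open-weights models for enterprises."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```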

    The enterprise adoption dilemma

    While Llama’s cost advantages remain attractive to businesses, growing concerns about its governance controls and the looming copyright lawsuit over training data are giving enterprises pause.

    “Companies love Llama’s affordability but expanding safety gaps and legal risks are becoming hard to ignore,” Singh said. “For mission-critical applications, many will ultimately choose more reliable, if more expensive, options like GPT or Gemini.”

    These cost benefits can lose their appeal when weighed against operational risks. Meta’s reorganization attempts to mitigate these concerns through two specialized teams, one deploying generative AI across products and the other advancing AGI research, but analysts remain skeptical about whether structural changes alone can solve deeper issues.

    Unlike Microsoft’s turnkey OpenAI integration or Google’s enterprise-ready Vertex AI platform, Meta lacks both the sales infrastructure and compliance pedigree for regulated industries. As Singh argued, “Enterprise AI adoption hinges on proven compliance frameworks, operational reliability, and mature support systems. Meta still needs to build that trust at Fortune 500 scale.”

    Meta’s race to close the AGI gap

    “Meta’s AGI push focused on models with reasoning, multimedia, and voice capabilities aligns with the broader industry trend toward multimodal AI as a catalyst for enterprise transformation,” said Surjyadeb Goswami, research director for AI and Automation at IDC Asia Pacific. He noted that open-source models are critical to enabling cost-effective, transparent, and customizable deployments, especially as organizations deepen their GenAI investments.

    For Meta to truly capitalize on this opportunity and succeed in its AGI bid, it must rebuild trust, especially for enterprise adoption. Singh highlighted the need for Linux-like community stewardship, OpenAI-level safety protocols, and robust enterprise tooling. “Balancing openness with responsibility is Meta’s real challenge, especially as models approach general-purpose cognitive capability.”

    Meta now needs to show this reorganization yields meaningful improvements in model performance, talent retention, and enterprise adoption to validate its new approach. “Meta’s open-source vision is bold, but execution is everything,” Singh concluded.
    WWW.COMPUTERWORLD.COM
  • Meta's chief AI scientist says all countries should contribute data to a shared open-source AI model

    Yann LeCun, Meta's chief AI scientist, talks about AI regulation. Image: FABRICE COFFRINI / AFP via Getty Images


    Yann LeCun, Meta's chief AI scientist, has some ideas on open-source regulation.
    LeCun thinks open-source AI should be an international resource.
    Countries must ensure they are not "impeding open source platforms," he said.

    AI has surged to the top of the diplomatic agenda in the past couple of years. And one of the leading topics of discussion among researchers, tech executives, and policymakers is how open-source models, which are free for anyone to use and modify, should be governed.

    At the AI Action Summit in Paris earlier this year, Meta's chief AI scientist, Yann LeCun, said he'd like to see a world in which "we'll train our open-source platforms in a distributed fashion with data centers spread across the world." Each will have access to its own data sources, which they may keep confidential, but "they will contribute to a common model that will essentially constitute a repository of all human knowledge," he said.

    This repository will be larger than what any one entity, whether a country or company, can handle. India, for example, may not give away a body of knowledge comprising all the languages and dialects spoken there to a tech company. However, "they would be happy to contribute to training a big model, if they can, that is open source," he said.

    To achieve that vision, though, "countries have to be really careful with regulations and legislation." He said countries shouldn't impede open source, but favor it.

    Even for closed-loop systems, OpenAI CEO Sam Altman has said international regulation is critical. "I think there will come a time in the not-so-distant future, like we're not talking decades and decades from now, where frontier AI systems are capable of causing significant global harm," Altman said on the All-In podcast last year.

    Altman believes those systems will have a "negative impact way beyond the realm of one country" and said he wanted to see them regulated by "an international agency looking at the most powerful systems and ensuring reasonable safety testing."
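    LeCun describes the goal rather than a training algorithm, but federated averaging gives a feel for how sites could contribute to a common model while keeping their raw data local. The toy sketch below, a linear-regression FedAvg loop, is purely illustrative and is not Meta's method: each "site" computes an update on its private data, and only the model weights leave the site to be averaged.

```python
# Toy sketch of federated averaging (FedAvg): several data holders improve one
# shared model without ever pooling their raw data. Illustrative only; LeCun's
# remarks do not specify this (or any) algorithm.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, local_data, lr=0.1):
    """One gradient-descent step on a site's private least-squares objective."""
    X, y = local_data
    grad = X.T @ (X @ weights - y) / len(y)  # gradient of 0.5 * ||Xw - y||^2 / n
    return weights - lr * grad

# Three "countries", each holding private samples of the same underlying model.
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + 0.1 * rng.normal(size=100)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(50):
    # Each site trains locally; only updated weights are shared with the server.
    local_ws = [local_update(global_w, data) for data in sites]
    global_w = np.mean(local_ws, axis=0)  # the server averages contributions

print("recovered weights:", global_w)  # converges toward [2.0, -1.0]
```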

    WWW.BUSINESSINSIDER.COM
  • New Bambu Lab Update after Reported Problems

    3D printer manufacturer Bambu Lab has issued a new update after an earlier fix was withdrawn. Following what it termed a critical calibration bug, the company has acted swiftly to deliver new code to its many users.
    The Shenzhen-based company has now released firmware version V01.01.02.07 for its H2D 3D printer through its Public Beta Program. Rolled out on May 23, this update introduces a comprehensive set of new features, performance enhancements, and critical bug fixes designed to elevate print quality, expand hardware compatibility, and offer users greater control. The release builds on feedback gathered from earlier beta phases.
    The Bambu Lab H2D Laser Full Combo in a workshop. Image via Bambu Lab.
    Features and Improvements
    Firmware V01.01.02.07 adds native support for the CyberBrick time-lapse kit. It also expands the H2D’s onboard AI failure detection system, now giving users the ability to individually toggle detection functions for nozzle clumping, spaghetti printing, air printing, and purge chute pile-ups from the printer’s interface.
    Hardware compatibility has been further extended. The AMS 2 Pro and AMS HT systems now support RFID-based automatic matching of drying parameters and can perform drying operations without rotating spools. Additionally, the Laser & Cut module can now initiate tasks directly from USB drive files, improving workflow support.
    Performance updates include improved foreign object detection on the smooth PEI plate, better regulation of heatbed temperatures, enhanced first-layer quality, more reliable chamber temperature checks before printing begins, and improved accuracy of laser module flame detection. The update also enhances the accuracy of nozzle clumping and nozzle camera dirty detection, while optimizing the pre-purging strategy.
    A collision issue between the nozzle flow blocker and nozzle wiper—previously triggered during flow dynamics calibration—has been resolved. Calibration reliability for the liveview camera has also improved, and issues with pre-extrusion lines sticking to prints during layer transitions have been addressed.
    Bambu Lab H2D Launch. Image via Bambu Lab.
    However, two known issues remain in this beta release: detection of filament PTFE tube detachment is currently disabled, and users cannot adjust heatbed temperature via the Bambu Handy app. The latter is expected to be fixed in a future app update.
    This version replaces V01.01.02.04, which was briefly released on May 20 before being withdrawn due to a critical calibration bug. That earlier version caused the right nozzle to crash into the wiper during left-nozzle calibration, damaging the printer. The firmware also temporarily disabled filament detachment detection. Bambu Lab quickly pulled the update and advised users to revert to the previous stable firmware while working on a corrected release—now realized in version V01.01.02.07.
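    For users checking where their printer sits relative to the withdrawn and corrected builds, version strings in this scheme compare cleanly as integer tuples. The helper below is hypothetical, not part of Bambu Lab's tooling; the parsing format is inferred from the version strings quoted above.

```python
# Hypothetical helper for comparing Bambu-style firmware versions such as
# "V01.01.02.07". Format inferred from this article; not an official tool.
def parse_version(v: str) -> tuple:
    """Turn 'V01.01.02.07' into (1, 1, 2, 7) for tuple comparison."""
    return tuple(int(part) for part in v.lstrip("Vv").split("."))

WITHDRAWN = "V01.01.02.04"  # pulled on May 20 after the calibration bug
FIXED = "V01.01.02.07"      # corrected beta released May 23

def needs_update(installed: str) -> bool:
    """True if the installed firmware predates the corrected release."""
    return parse_version(installed) < parse_version(FIXED)

assert needs_update(WITHDRAWN)            # the buggy build should be replaced
assert not needs_update("V01.01.02.07")   # already on the corrected release
```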
    Accessing the Firmware
    To access the beta firmware, users can opt into the Public Beta Program through the Bambu Handy app by navigating to the “Me” section and selecting “Beta Firmware Program.” Once enrolled, the update will be rolled out gradually. Participants can leave the program at any time and revert to the most recent stable firmware version. Bambu Lab recommends updating Bambu Studio Presets before installing the firmware to ensure full compatibility. Full technical documentation and the official changelog are available on Bambu Lab’s website.
    Bambu Lab Hardware Line: H2D and Beyond
    The new firmware update applies to the H2D 3D printer, Bambu Lab’s flagship desktop manufacturing system unveiled in March 2025. Designed for professional users, the H2D offers the company’s largest build volume to date—350 x 320 x 325 mm—and includes two new AMS systems with integrated filament drying. Dual-nozzle extrusion and servo-driven precision deliver high accuracy, while a 350°C hotend and 65°C heated chamber allow reliable printing with high-performance, fiber-reinforced materials. With a toolhead speed of up to 1000 mm/s and acceleration of 20,000 mm/s², the H2D is built for productivity without compromising quality.
    The Bambu Lab H2D’s digital cutter. Image via Bambu Lab.
    Bambu Lab’s broader portfolio also includes the X1E, released in 2023 as an enterprise-grade upgrade to its X1 series. Developed with professional and engineering applications in mind, the X1E features LAN-only connectivity for secure, offline operation, enhanced air filtration, and precise thermal regulation. An increased maximum nozzle temperature expands its material compatibility, making it suitable for demanding industrial applications. At its core, the X1E builds on the proven performance of the X1 Carbon, extending the system’s capabilities for use in sensitive or regulated environments.
    Featured image shows Bambu Lab H2D Launch. Image via Bambu Lab.

    Paloma Duran
    Paloma Duran holds a BA in International Relations and an MA in Journalism. Specializing in writing, podcasting, and content and event creation, she works across politics, energy, mining, and technology. With a passion for global trends, Paloma is particularly interested in the impact of technology like 3D printing on shaping our future.
    3DPRINTINGINDUSTRY.COM