• Sienna Net-Zero Home / billionBricks

    © Ron Mendoza, Mark Twain C, BB team
    Houses, Sustainability • Quezon City, Philippines

    Architects: billionBricks
    Area: 45 m²
    Year: 2024
    Photographs: Ron Mendoza, Mark Twain C, BB team
    Text description provided by the architects. Built to address homelessness and climate change, the Sienna Net-Zero Home is a self-sustaining, solar-powered, cost-efficient, and compact housing solution. This climate-responsive and affordable home, located in Quezon City, Philippines, represents a revolutionary vision for social housing through its integration of thoughtful design, sustainability, and energy self-sufficiency.

    Designed with the unique tropical climate of the Philippines in mind, the Sienna Home prioritizes natural ventilation, passive cooling, and rainwater management to enhance indoor comfort and reduce reliance on artificial cooling systems. The compact 4.5m x 5.1m floor plan has been meticulously optimized for functionality, offering a flexible layout that grows and adapts with the families living in it.

    A key architectural feature is billionBricks' innovative PowerShade technology, an advanced solar roofing system that serves multiple purposes. Beyond generating clean, renewable energy, it acts as a protective heat barrier, reducing indoor temperatures and improving thermal comfort. Unlike conventional solar panels, PowerShade integrates seamlessly with the home's structure, providing reliable energy generation while doubling as a durable roof. This makes the Sienna Home energy-positive, meaning it produces more electricity than it consumes, lowering utility costs and promoting long-term energy independence. Excess power can also be stored or sold back to the grid, creating an additional financial benefit for homeowners.

    When multiple Sienna Homes are built together, the PowerShade roofing solution transcends its role as an individual energy source and becomes a utility-scale rooftop solar farm, capable of powering essential community facilities and generating additional income. This shared energy infrastructure fosters a sense of collective empowerment, enabling residents to participate actively in a sustainable and financially rewarding energy ecosystem.

    The Sienna Home is built from lightweight prefabricated components, allowing rapid on-site assembly while maintaining durability and structural integrity. This modular approach enables scalability, making it an ideal prototype for large-scale, cost-effective housing developments. The design also allows for future expansions, giving homeowners the flexibility to adapt their living spaces over time.

    Adhering to BP 220 social housing regulations, the unit features a 3-meter front setback and a 2-meter rear setback, ensuring proper ventilation, safety, and community-friendly spaces. Additionally, corner units include a 1.5-meter offset, enhancing privacy and accessibility within neighborhood layouts. Beyond providing a single-family residence, the Sienna Home is designed to function within a larger sustainable community model, integrating shared green spaces, pedestrian pathways, and decentralized utilities. By promoting energy independence and environmental resilience, the project sets a new precedent for affordable yet high-quality housing solutions in rapidly urbanizing regions.

    The Sienna Home in Quezon City serves as a blueprint for future developments, proving that low-cost housing can be both architecturally compelling and socially transformative.
    By rethinking traditional housing models, billionBricks is pioneering a future where affordability and sustainability are seamlessly integrated.
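    To make the "energy-positive" claim concrete, here is a back-of-envelope check. Every number below is an illustrative assumption (roof coverage, module efficiency, Philippine peak-sun hours, household demand), not a published billionBricks figure for PowerShade:

```python
# Back-of-envelope "energy-positive" check for a compact solar-roofed home.
# Every constant is an illustrative assumption, not a billionBricks figure.
ROOF_AREA_M2 = 4.5 * 5.1       # footprint from the stated floor plan, ~23 m²
PANEL_EFFICIENCY = 0.20        # typical mono-crystalline module
PEAK_SUN_HOURS = 4.5           # rough daily average for the Philippines
DEMAND_KWH_PER_DAY = 6.0       # modest household consumption

# At ~1 kW/m² peak irradiance, area × efficiency approximates array size in kW.
generation_kwh = ROOF_AREA_M2 * PANEL_EFFICIENCY * PEAK_SUN_HOURS
surplus_kwh = generation_kwh - DEMAND_KWH_PER_DAY
print(f"generation ≈ {generation_kwh:.1f} kWh/day, surplus ≈ {surplus_kwh:+.1f} kWh/day")
```

    Under these assumptions the roof yields roughly 20 kWh per day against 6 kWh of demand, the kind of margin that would let surplus power be stored or sold back to the grid as the architects describe.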

    Published on June 15, 2025. Cite: "Sienna Net-Zero Home / billionBricks" 14 Jun 2025. ArchDaily. <https://www.archdaily.com/1031072/sienna-billionbricks> ISSN 0719-8884
    WWW.ARCHDAILY.COM
  • Studio Egret West sends in plans for Albert Bridge House redevelopment in Manchester

    The revised scheme (gallery of five images). Source: Studio Egret West

    Plans for a revised mixed-use scheme in Manchester have been sent in to local planners.
    The original scheme for the redevelopment of Albert Bridge House, drawn up by Studio Egret West and given a resolution to grant planning permission two years ago, proposed just over 1 million sq ft of commercial space along with just over 350 build-to-rent homes.
    But developer Oval Real Estate has since had a rethink because “the financial landscape has shifted significantly”. It added: “As such, our new proposals have been developed in response to this challenge and to better align with current market needs and community priorities.”
    In a LinkedIn post, Studio Egret West added: “Whilst the earlier design featured a single residential tower and an expansive commercial office block, changing economic conditions have necessitated a rethinking of its scale and delivery strategy.”
    The new plan has more than doubled the number of build-to-rent homes to around 800 across two blocks of 49 and 37 storeys.
    The commercial space has been pared back to around 250,000 sq ft across a 17-storey block.

    The previously consented scheme. Source: Studio Egret West
    The 1.2 ha site includes a vacant 1950s office building formerly occupied by HMRC, a surface-level car park and the adjacent Albert Bridge Gardens.
    Across the site, new public realm is proposed, including an expanded riverside walk, new play areas and an “urban arboretum” that incorporates existing mature trees on the plot.
    Studio Egret West is acting as architect, landscape architect and principal designer for the scheme, with others working on the scheme including planning consultant Deloitte, QS Cumming Group, structural and civil engineer AKT II and M&E engineer Hoare Lea.
    WWW.BDONLINE.CO.UK
  • Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm


    When DeepSeek released its R1 model this January, it wasn’t just another AI announcement. It was a watershed moment that sent shockwaves through the tech industry, forcing industry leaders to reconsider their fundamental approaches to AI development.
    What makes DeepSeek’s accomplishment remarkable isn’t that the company developed novel capabilities; rather, it was how it achieved comparable results to those delivered by tech heavyweights at a fraction of the cost. In reality, DeepSeek didn’t do anything that hadn’t been done before; its innovation stemmed from pursuing different priorities. As a result, we are now experiencing rapid-fire development along two parallel tracks: efficiency and compute. 
    As DeepSeek prepares to release its R2 model, and as it concurrently faces the potential of even greater chip restrictions from the U.S., it’s important to look at how it captured so much attention.
    Engineering around constraints
    DeepSeek’s arrival, as sudden and dramatic as it was, captivated us all because it showcased the capacity for innovation to thrive even under significant constraints. Faced with U.S. export controls limiting access to cutting-edge AI chips, DeepSeek was forced to find alternative pathways to AI advancement.
    While U.S. companies pursued performance gains through more powerful hardware, bigger models and better data, DeepSeek focused on optimizing what was available. It implemented known ideas with remarkable execution — and there is novelty in executing what’s known and doing it well.
    This efficiency-first mindset yielded incredibly impressive results. DeepSeek's R1 model reportedly matches OpenAI's capabilities at just 5 to 10% of the operating cost. According to reports, the final training run for DeepSeek's V3 predecessor cost a mere $6 million — which was described by former Tesla AI scientist Andrej Karpathy as "a joke of a budget" compared to the tens or hundreds of millions spent by U.S. competitors. More strikingly, while OpenAI reportedly spent $500 million training its recent "Orion" model, DeepSeek achieved superior benchmark results for just $5.6 million — less than 1.2% of OpenAI's investment.
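    Those percentages check out arithmetically against the reported figures:

```python
# Reported training spend: OpenAI "Orion" vs. DeepSeek (figures as reported above).
orion, deepseek = 500e6, 5.6e6
print(f"{deepseek / orion:.2%}")  # 1.12%, under the quoted 1.2%
```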
    If you get starry-eyed believing these incredible results were achieved even as DeepSeek was at a severe disadvantage based on its inability to access advanced AI chips, I hate to tell you, but that narrative isn't entirely accurate (even though it makes a good story). Initial U.S. export controls focused primarily on compute capabilities, not on memory and networking — two crucial components for AI development.
    That means that the chips DeepSeek had access to were not poor-quality chips; their networking and memory capabilities allowed DeepSeek to parallelize operations across many units, a key strategy for running its large models efficiently.
    This, combined with China’s national push toward controlling the entire vertical stack of AI infrastructure, resulted in accelerated innovation that many Western observers didn’t anticipate. DeepSeek’s advancements were an inevitable part of AI development, but they brought known advancements forward a few years earlier than would have been possible otherwise, and that’s pretty amazing.
    Pragmatism over process
    Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. Rather than relying solely on web-scraped content, DeepSeek reportedly leveraged significant amounts of synthetic data and outputs from other proprietary models. This is a classic example of model distillation, or the ability to learn from really powerful models. Such an approach, however, raises questions about data privacy and governance that might concern Western enterprise customers. Still, it underscores DeepSeek’s overall pragmatic focus on results over process.
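    As a concrete picture of distillation, here is a minimal sketch of the classic soft-label loss, assuming PyTorch. The temperature and toy tensors are illustrative; this is the generic technique, not DeepSeek's training code:

```python
# Minimal sketch of knowledge distillation, assuming PyTorch.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from softened teacher to student distributions."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scaling by T^2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2

# Toy usage: batch of 4 examples over a 10-token vocabulary.
loss = distillation_loss(torch.randn(4, 10), torch.randn(4, 10))
print(loss.item())
```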
    The effective use of synthetic data is a key differentiator. Synthetic data can be very effective when it comes to training large models, but you have to be careful; some model architectures handle synthetic data better than others. For instance, transformer-based models with mixture of experts (MoE) architectures like DeepSeek's tend to be more robust when incorporating synthetic data, while more traditional dense architectures like those used in early Llama models can experience performance degradation or even "model collapse" when trained on too much synthetic content.
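    For intuition, here is a toy top-1 mixture-of-experts layer, assuming PyTorch. It shows only the routing mechanics; real MoE transformers, DeepSeek's included, add top-k routing, load-balancing losses, and expert parallelism:

```python
# Toy mixture-of-experts (MoE) layer with top-1 routing, assuming PyTorch.
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, d_model: int = 16, n_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)   # learned router
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, d_model)
        probs = self.gate(x).softmax(dim=-1)              # routing probabilities
        top_p, top_idx = probs.max(dim=-1)                # one expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                # Each token's output is scaled by its routing probability.
                out[mask] = top_p[mask].unsqueeze(-1) * expert(x[mask])
        return out

print(TinyMoE()(torch.randn(8, 16)).shape)  # torch.Size([8, 16])
```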
    This architectural sensitivity matters because synthetic data introduces different patterns and distributions compared to real-world data. When a model architecture doesn’t handle synthetic data well, it may learn shortcuts or biases present in the synthetic data generation process rather than generalizable knowledge. This can lead to reduced performance on real-world tasks, increased hallucinations or brittleness when facing novel situations. 
    Still, DeepSeek’s engineering teams reportedly designed their model architecture specifically with synthetic data integration in mind from the earliest planning stages. This allowed the company to leverage the cost benefits of synthetic data without sacrificing performance.
    Market reverberations
    Why does all of this matter? Stock market aside, DeepSeek’s emergence has triggered substantive strategic shifts among industry leaders.
    Case in point: OpenAI. Sam Altman recently announced plans to release the company’s first “open-weight” language model since 2019. This is a pretty notable pivot for a company that built its business on proprietary systems. It seems DeepSeek’s rise, on top of Llama’s success, has hit OpenAI’s leader hard. Just a month after DeepSeek arrived on the scene, Altman admitted that OpenAI had been “on the wrong side of history” regarding open-source AI. 
    With OpenAI reportedly spending $7 to $8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: "You're spending $7 billion or $8 billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that's for free." This necessitates change.
    This economic reality prompted OpenAI to pursue a massive $40 billion funding round that valued the company at an unprecedented $300 billion. But even with a war chest of funds at its disposal, the fundamental challenge remains: OpenAI's approach is dramatically more resource-intensive than DeepSeek's.
    Beyond model training
    Another significant trend accelerated by DeepSeek is the shift toward "test-time compute" (TTC). As major AI labs have now trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training.
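    Spending inference compute instead of training compute can be as simple as best-of-N sampling. A minimal sketch, with hypothetical `generate` and `reward` stand-ins (no specific vendor API is implied):

```python
# Hedged sketch of test-time compute via best-of-N sampling.
# `generate` and `reward` are hypothetical stand-ins for a sampling
# LLM call and a reward/judge model; no specific vendor API is implied.
def generate(prompt: str, temperature: float = 0.9) -> str:
    raise NotImplementedError("plug in a sampling model call")

def reward(prompt: str, answer: str) -> float:
    raise NotImplementedError("plug in a reward model")

def best_of_n(prompt: str, n: int = 8) -> str:
    candidates = [generate(prompt) for _ in range(n)]
    # More samples = more inference compute = better expected answer quality.
    return max(candidates, key=lambda a: reward(prompt, a))
```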
    To get around this data scarcity, DeepSeek announced a collaboration with Tsinghua University to enable "self-principled critique tuning" (SPCT). This approach trains AI to develop its own rules for judging content and then uses those rules to provide detailed critiques. The system includes a built-in "judge" that evaluates the AI's answers in real time, comparing responses against core rules and quality standards.
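    In rough pseudocode, the generate-judge-revise cycle just described might look like the following sketch, where `llm` is a hypothetical stand-in for any completion call and the prompts and "SCORE:" convention are invented for illustration. DeepSeek's actual SPCT work applies at training time to reward models, so treat this as a conceptual sketch only:

```python
# Hedged sketch of a self-critique loop in the spirit of SPCT.
# `llm` is a hypothetical completion function; prompts and the
# "SCORE:" convention are invented for illustration.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in a model call here")

def self_principled_critique(question: str, max_rounds: int = 3) -> str:
    # 1. The model proposes its own judging principles for this question.
    principles = llm(f"List principles for judging an answer to:\n{question}")
    answer = llm(question)
    for _ in range(max_rounds):
        # 2. A built-in "judge" critiques the answer against those principles.
        critique = llm(
            f"Principles:\n{principles}\n\nAnswer:\n{answer}\n\n"
            "Critique the answer. Begin with 'SCORE: <1-10>'."
        )
        score = int(critique.split("SCORE:")[1].split()[0].strip(".,"))
        if score >= 8:  # judge satisfied; stop revising
            break
        # 3. Revise the answer using the critique.
        answer = llm(
            f"Question: {question}\nDraft answer: {answer}\n"
            f"Critique: {critique}\n\nRewrite the draft to address the critique."
        )
    return answer
```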
    The development is part of a movement toward autonomous self-evaluation and improvement in AI systems, in which models use inference time to improve results rather than simply growing larger during training. DeepSeek calls its system "DeepSeek-GRM" (generalist reward modeling). But, as with its model distillation approach, this could be considered a mix of promise and risk.
    For example, if the AI develops its own judging criteria, there’s a risk those principles diverge from human values, ethics or context. The rules could end up being overly rigid or biased, optimizing for style over substance, and/or reinforce incorrect assumptions or hallucinations. Additionally, without a human in the loop, issues could arise if the “judge” is flawed or misaligned. It’s a kind of AI talking to itself, without robust external grounding. On top of this, users and developers may not understand why the AI reached a certain conclusion — which feeds into a bigger concern: Should an AI be allowed to decide what is “good” or “correct” based solely on its own logic? These risks shouldn’t be discounted.
    At the same time, this approach is gaining traction, as DeepSeek again builds on the body of work of others (think OpenAI's "critique and revise" methods, Anthropic's constitutional AI, or research on self-rewarding agents) to create what is likely the first full-stack application of SPCT in a commercial effort.
    This could mark a powerful shift in AI autonomy, but there still is a need for rigorous auditing, transparency and safeguards. It’s not just about models getting smarter, but that they remain aligned, interpretable, and trustworthy as they begin critiquing themselves without human guardrails.
    Moving into the future
    So, taking all of this into account, the rise of DeepSeek signals a broader shift in the AI industry toward parallel innovation tracks. While companies continue building more powerful compute clusters for next-generation capabilities, there will also be intense focus on finding efficiency gains through software engineering and model architecture improvements to offset the challenges of AI energy consumption, which far outpaces power generation capacity. 
    Companies are taking note. Microsoft, for example, has halted data center development in multiple regions globally, recalibrating toward a more distributed, efficient infrastructure approach. While still planning to invest approximately $80 billion in AI infrastructure this fiscal year, the company is reallocating resources in response to the efficiency gains DeepSeek introduced to the market.
    Meta has also responded.
    With so much movement in such a short time, it becomes somewhat ironic that the U.S. sanctions designed to maintain American AI dominance may have instead accelerated the very innovation they sought to contain. By constraining access to materials, DeepSeek was forced to blaze a new trail.
    Moving forward, as the industry continues to evolve globally, adaptability for all players will be key. Policies, people and market reactions will continue to shift the ground rules — whether it’s eliminating the AI diffusion rule, a new ban on technology purchases or something else entirely. It’s what we learn from one another and how we respond that will be worth watching.
    Jae Lee is CEO and co-founder of TwelveLabs.

    VENTUREBEAT.COM
  • Devs are considering quitting en masse because of embarrassing legacy tech, survey finds

    Developers are increasingly quitting jobs over outdated tech stacks, citing embarrassment, poor morale, and dysfunctional CMS tools as major reasons for rethinking their careers.
    WWW.TECHRADAR.COM
  • Call for entries: Together, Let's All Go to the Sports Center!

    The Faculté de l'aménagement at the Université de Montréal is pleased to announce the launch of an international, multidisciplinary and anonymous ideas competition, reserved for students, to create inclusive experiences at the CEPSUM, the Université de Montréal's sports center.

    With a pool of prizes on offer, the competition promotes the idea of invisible accessibility: an experience of the built environment that is of high quality for all, where accessibility is integrated in an indistinguishable manner, and where universal accessibility is envisaged as a global state of the project experience rather than a dedicated path made up of identifiable and visible solutions.

    Participants are invited to propose transformative ideas that offer inclusive and equitable experiences for all users. The competition is structured around three typical sports center experiences that are not currently universally accessible:

    1. The main entrance: rethinking the entrance and reception of the sports center;
    2. Carabins stadium: improving the game-going experience;
    3. The pool: creating an inclusive swimming experience.

    The proposals received over the summer will be evaluated by a multidisciplinary jury of eight experts. For each of the three experiences, three winning projects will be selected, for a total of nine winners. All proposals will be presented in October 2025 at a conference organized by the Faculté de l'aménagement, bringing together researchers working on accessibility in the built environment.

    "We warmly welcome students from around the world to propose bold, creative ideas that reimagine universal accessibility—not as an add-on, but as an integral, seamless, and uplifting experience for everyone," says Carmela Cucuzzella, Dean of the Faculté de l'aménagement. "We are looking for designs that are not only inclusive but also invisible in their accommodation, free of stigma, and full of delight and safety. Think beyond the box—then break it wide open."

    "A public space that is not accessible to everyone cannot be considered public," says Bechara Helal, Associate Dean of Research and Scientific Life. "It is high time to rethink the place of universal accessibility in design disciplines, and that is what this competition aims to do: define innovative ways of designing the built environment so that it can become the setting for quality public experiences for all."

    About the Faculty of Environmental Design
    The mission of the Faculty of Environmental Design is to train high-calibre professionals and researchers who are qualified to contribute to progress and innovation in design practices. The Faculty is recognized for offering students a rich and stimulating learning environment in a context of intellectual freedom, in which students acquire innovative work methods and develop the critical thinking and professional discipline that allow them to become responsible citizens aware of the issues facing them.

    Prizes
    An eight-member multidisciplinary jury will review the proposals. Three winning projects will be selected for each of the three experiences: First place: CAD; Second place: CAD; Third place: CAD. Additionally, each winner will receive a certificate of merit.

    Registrations are open until July 1, 2025. The submission deadline is August 5, 2025, at 2:00 PM. Top image courtesy of the Faculté de l'aménagement.

    > via University of Montreal - Faculty of Environmental Design
> via University of Montreal - Faculty of Environmental Design