• Dell and Nvidia to Power the Next Generation of Supercomputers: A Move Towards Sustainable AI Growth

    Key Takeaways

    Dell and Nvidia will jointly provide the architecture for the US Department of Energy's next flagship supercomputer, named Doudna.
    Dell will focus on sustainable hardware and cooling systems, while Nvidia will provide its AI architecture, including the Vera Rubin AI chips.
    The US Department of Energy wants to focus on the sustainable development of AI, hence the choice of environment-conscious companies.

    The US Department of Energy said that Dell's next supercomputer will be delivered with Nvidia's 'Vera Rubin' AI chips, marking the beginning of a new era of AI dominance in research. The system is expected to be 10x faster than the current generation of supercomputers, which HPE provided.
    The supercomputer will be named 'Doudna,' after Jennifer Doudna, a Nobel Prize winner known for her key contributions to CRISPR gene editing.
    Supercomputers have been instrumental in key scientific discoveries in the last few decades and also played a big role in the design and maintenance of the U.S. nuclear weapons arsenal. And now, with the introduction of artificial intelligence, we’re heading towards a new decade of faster and more efficient scientific research.

    It (supercomputing) is the foundation of scientific discovery for our country. It is also a foundation for economic and technological leadership. And with that, national security. – Nvidia CEO Jensen Huang

    Dell Going All in on AI
    This isn't the first time Dell and Nvidia have come together to develop new AI solutions. Back in March 2024, Dell announced the Dell AI Factory with NVIDIA, an end-to-end enterprise AI solution designed for businesses.
    This joint venture used Dell’s infrastructure, such as servers, storage, and networking, combined with NVIDIA’s AI architecture and technologies such as GPUs, DPUs, and AI software.

    Image Credit – Dell
    For instance, the Dell PowerEdge server uses NVIDIA’s full AI stack to provide enterprises with solutions required for a wide range of AI applications, including speech recognition, cybersecurity, recommendation systems, and language-based services.
    The demand for Dell's AI servers has also increased, reaching $12.1B in the first quarter of 2025, with a total backlog of $14.4B, which suggests strong future demand and a healthy order book. The company has set a bold revenue forecast of between $28.5B and $29.5B, against the analysts' prediction of $25.05B.
    With Doudna, Dell is well-positioned to lead the next generation of supercomputers in AI research, invention, and discoveries.
    Focus on Energy Efficiency
    Seagate has warned about an unprecedented increase in demand for AI data storage in the coming years, which poses a significant challenge to the sustainability of AI data centers. Global data volume is expected to increase threefold by 2028.

    Image Credit – DIGITIMES Asia
    The data storage industry currently produces only 1–2 zettabytes (one zettabyte equals a trillion gigabytes) of storage annually, which is much lower than what will be required in the next 4–5 years.
    At the same time, Goldman Sachs predicts that power requirements will also go up by 165% by 2030 due to increasing demand for AI data centers. This calls for a more sustainable approach for the supercomputing industry as well. 
    Dell will use its proprietary technologies, such as Direct Liquid Cooling, the PowerCool eRDHx, and Smart Flow design in the Doudna, ensuring energy efficiency.

    Direct Liquid Cooling (DLC) increases computing density by supporting more cores per rack, which reduces cooling costs by as much as 45%.
    Dell's PowerCool eRDHx is a self-contained airflow design that can capture 100% of the heat generated by IT systems. This reduces the dependency on expensive chillers, as the eRDHx can operate at typical water temperatures of 32 to 36 degrees Celsius, leading to 60% savings in cooling energy costs.
    Lastly, the Dell Smart Flow design improves airflow within IT components and reduces the fan power by 52%. This leads to better performance with fewer cooling requirements.
    Besides this, Dell plans to incorporate Leak Sense Technology: if a coolant leak occurs, the system's leak sensor logs an alert in the iDRAC management console so that swift action can be taken.
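To get a feel for how the quoted efficiency figures might combine, here is a rough back-of-envelope sketch. The baseline energy value is an illustrative assumption, not a Dell specification, and treating the DLC and eRDHx savings as independent multiplicative factors is a simplification; in practice the savings overlap and do not stack cleanly.

```python
# Back-of-envelope estimate of combined cooling savings.
# Baseline and stacking assumptions are illustrative, not Dell specs.

def cooling_energy_after_savings(baseline_kwh: float,
                                 dlc_saving: float = 0.45,
                                 erdhx_saving: float = 0.60) -> float:
    """Apply the quoted DLC (~45%) and eRDHx (~60%) savings in sequence.

    Multiplying the two factors assumes they act independently, which is
    a simplifying assumption for illustration only.
    """
    return baseline_kwh * (1 - dlc_saving) * (1 - erdhx_saving)

baseline = 1_000_000  # assumed annual cooling energy in kWh (hypothetical)
remaining = cooling_energy_after_savings(baseline)
print(f"Remaining cooling energy: {remaining:,.0f} kWh "
      f"({remaining / baseline:.0%} of baseline)")
```

Under these assumptions, only about a fifth of the baseline cooling energy would remain, which illustrates why Dell highlights these technologies together rather than individually.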

    As per the IEA report titled 'Energy and AI', data center electricity demand will increase to 945 terawatt-hours (TWh) by 2030. For comparison, this is more than the total electricity consumption of Japan today.
    The US alone will consume more electricity in 2030 for processing data than for manufacturing all energy-intensive goods combined, including aluminum, steel, cement, and chemicals.
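The scale of these projections can be sanity-checked with simple arithmetic. The figure for Japan's annual consumption below is an approximate public estimate used only for comparison, not a number from the IEA report itself.

```python
# Quick scale check on the demand projections cited above.
# Japan's consumption figure is an approximate public estimate (assumption).

dc_demand_2030_twh = 945      # IEA 'Energy and AI' projection for 2030
japan_consumption_twh = 900   # approximate annual total, for comparison

ratio = dc_demand_2030_twh / japan_consumption_twh
print(f"Projected data center demand is about {ratio:.2f}x "
      f"Japan's annual electricity consumption")

# Goldman Sachs' projected 165% increase means demand reaches
# 1 + 1.65 = 2.65x today's level by 2030:
growth_factor = 1 + 1.65
print(f"Power demand growth factor by 2030: {growth_factor:.2f}x")
```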
    Therefore, the need to develop sustainable AI data centers and supercomputers cannot be overstated. Dell's technology-focused, sustainable approach can be a pivotal point in how efficiently we use AI in the next decade.
    The US Department of Energy's choice of Dell also seems to be a conscious shift towards companies that prioritize sustainability and can ensure the long-term viability of research-intensive AI setups.

    Krishi is a seasoned tech journalist with over four years of experience writing about PC hardware, consumer technology, and artificial intelligence.  Clarity and accessibility are at the core of Krishi’s writing style.
    He believes technology writing should empower readers—not confuse them—and he’s committed to ensuring his content is always easy to understand without sacrificing accuracy or depth.
    Over the years, Krishi has contributed to some of the most reputable names in the industry, including Techopedia, TechRadar, and Tom’s Guide. A man of many talents, Krishi has also proven his mettle as a crypto writer, tackling complex topics with both ease and zeal. His work spans various formats—from in-depth explainers and news coverage to feature pieces and buying guides. 
    Behind the scenes, Krishi operates from a dual-monitor setup (including a 29-inch LG UltraWide) that's always buzzing with news feeds, technical documentation, and research notes, as well as the occasional gaming session that keeps him fresh.
    Krishi thrives on staying current, always ready to dive into the latest announcements, industry shifts, and their far-reaching impacts.  When he's not deep into research on the latest PC hardware news, Krishi would love to chat with you about day trading and the financial markets—oh! And cricket, as well.

    View all articles by Krishi Chowdhary

    Our editorial process

    The Tech Report editorial policy is centered on providing helpful, accurate content that offers real value to our readers. We only work with experienced writers who have specific knowledge in the topics they cover, including the latest developments in technology, online privacy, cryptocurrencies, software, and more. Our editorial policy ensures that each topic is researched and curated by our in-house editors. We maintain rigorous journalistic standards, and every article is 100% written by real authors.
    techreport.com
  • Here are the nuclear fission startups backed by Big Tech

    Artificial intelligence has sent demand for electricity skyrocketing in the U.S. after years of virtually zero growth. That has sent Big Tech companies scrambling to secure generating capacity for their data centers.
    For many, that has meant turning to nuclear fission. The power source has been experiencing a resurgence in the last few years following decades of plant closures. (Fission, used in all current nuclear plants, is distinct from fusion, the still-experimental approach to getting power from atoms that, while attracting investors, has yet to produce more electricity than it consumes.) For tech companies, part of the appeal of fission is a stable, predictable source of power that flows 24/7, giving their data centers the potential to run computing loads whenever they require it.
    But another part of the appeal lies in new reactor designs that promise to overcome the shortcomings of existing nuclear power plants. Where old power plants were built around massive reactors that could generate over 1 gigawatt of electricity, new small modular reactor (SMR) designs see multiple modules deployed alongside each other to meet a range of needs.
    SMRs rely on mass manufacturing to bring costs down, but to date, no one has built one in the U.S. Still, that hasn’t kept Amazon, Google, Meta, and Microsoft away from the table. They’ve either signed agreements to buy power from nuclear startups or invested in them directly — or both.
    Here are the nuclear fission startups backed by Big Tech.
    Kairos Power
    Kairos Power received a vote of confidence from Google when the search giant promised to buy around 500 megawatts of electricity by 2035, with the first reactor targeted to come online by 2030.


    The company’s small modular reactors rely on molten fluoride salt for cooling and to transport heat to a steam turbine. The salt’s high boiling point means that the coolant doesn’t need to be kept at high pressure, which should improve operating safety. The reactors contain fuel pebbles coated in carbon and ceramic shells, which should be strong enough to withstand a meltdown.
    The Alameda-based startup has received multimillion-dollar awards from the U.S. government, including funding from the Department of Energy. In November 2024, Kairos received approval from the U.S. Nuclear Regulatory Commission to commence construction on two reactors in Tennessee. At 35 megawatts, the test units will be smaller than Kairos' eventual commercial reactors, which are expected to generate 75 megawatts each.
    Oklo
    Oklo is another SMR company targeting the data center world — no surprise given that it was backed by OpenAI CEO Sam Altman, who also took the nuclear startup public via a reverse merger with his special purpose acquisition vehicle, AltC, in July 2023. Altman served as chairman of Oklo until April, when he stepped down as OpenAI began negotiating with Oklo for an energy supply agreement. DCVC, Draper Associates, and Peter Thiel’s Mithril Capital Management are among the startup’s previous investors.
    Cooled by liquid metal, Oklo’s reactor is based on an existing U.S. Department of Energy design that’s intended to reduce the amount of nuclear waste that results from regular operations. Still, Oklo’s path hasn’t been a smooth one. The company’s first license application was denied in January 2022. Oklo has said it will resubmit the application sometime in 2025. But that hasn’t stopped the company from landing a deal to supply data center operator Switch with 12 gigawatts by 2044.
    Saltfoss
    Like Kairos, Saltfoss, formerly known as Seaborg, wants to build SMRs cooled by molten salt. But unlike Kairos and others, it envisions placing two to eight of them on a ship to create what it calls a Power Barge. The startup has raised substantial funding, including a seed round with investments from Bill Gates, Peter Thiel, and Unity co-founder David Helgason, according to PitchBook. Saltfoss has an agreement with Samsung Heavy Industries to build the ships and the Saltfoss-designed reactors.
    TerraPower
    Founded by Bill Gates, TerraPower is building a larger reactor, called Natrium, which is cooled by liquid sodium and features molten salt energy storage.
    The company broke ground on the first power plant in June 2024 in Wyoming. The Natrium design calls for the reactor to generate 345 megawatts of electricity. That’s smaller than other new nuclear plants today but larger than most SMR designs. 
    But Natrium has a trick up its sleeve with its molten salt heat storage system. Since nuclear reactors operate best at a steady state, the Natrium reactor can continue splitting atoms when demand is low; the extra energy is stored as heat in a vat of molten salt, which can be drawn upon later to generate electricity.
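The load-following idea described above can be sketched as a toy model: the reactor runs at constant output, a salt buffer absorbs surplus heat when demand is low, and the buffer is drawn down when demand spikes. All numbers and the storage behavior (lossless, unlimited charge/discharge rate) are illustrative assumptions, not TerraPower specifications.

```python
# Toy model of steady-state reactor output plus a molten-salt buffer.
# Capacity, demand profile, and lossless storage are illustrative assumptions.

REACTOR_MW = 345  # Natrium's stated electrical output; runs at steady state


def dispatch(demand_profile, storage_mwh=0.0, storage_cap_mwh=1000.0):
    """Return power delivered per hour; surplus heat charges the salt buffer.

    This sketch ignores thermal losses and rate limits on charging and
    discharging, which a real system would have.
    """
    delivered = []
    for demand in demand_profile:
        available = REACTOR_MW + storage_mwh   # reactor plus stored energy
        out = min(demand, available)           # serve demand if possible
        # Surplus (demand below reactor output) charges the buffer;
        # deficit discharges it. Capped at the buffer's capacity.
        storage_mwh = min(storage_cap_mwh, storage_mwh + REACTOR_MW - out)
        delivered.append(out)
    return delivered, storage_mwh


# Hypothetical day: low overnight demand, then an evening spike above 345 MW.
demand = [200, 200, 300, 500]
out, salt_left = dispatch(demand)
print(out, salt_left)
```

In this sketch the reactor never changes its output: the 500 MW spike in the final hour is met by draining heat banked during the low-demand hours, which is exactly the steady-state advantage the article describes.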
    Investors include Gates’ Cascade Investment fund, Khosla Ventures, CRV, and ArcelorMittal.
    X-Energy
    X-Energy landed a hefty Series C-1 round last year led by Amazon's Climate Pledge Fund. At the same time, the SMR startup announced two development agreements that would see the deployment of 300 megawatts of new nuclear generating capacity in the Pacific Northwest and Virginia.
    The company’s high-temperature, gas-cooled reactors buck recent trends in the U.S. and Europe, where the design has been shunned in favor of other approaches. The company’s Xe-100 reactor is expected to generate 80 megawatts of electricity. Helium gas flows through the reactor’s 200,000 billiard ball-sized fuel “pebbles,” absorbing heat to spin a steam turbine. 
    #here #are #nuclear #fission #startups
    Here are the nuclear fission startups backed by Big Tech
    Artificial intelligence has sent demand for electricity skyrocketing in the U.S. after years of virtually zero growth. That has sent Big Tech companies scrambling to secure generating capacity for their data centers. For many, that has meant turning to nuclear fission. The power source has been experiencing a resurgence in the last few years following decades of plant closures.For tech companies, part of the appeal of fission is a stable, predictable source of power that flows 24/7, giving their data centers the potential to run computing loads whenever they require it.  But another part of the appeal lies in new reactor designs that promise to overcome the shortcomings of existing nuclear power plants. Where old power plants were built around massive reactors that could generate over 1 gigawatt of electricity, new small modular reactordesigns see multiple modules deployed alongside each other to meet a range of needs.  SMRs rely on mass manufacturing to bring costs down, but to date, no one has built one in the U.S. Still, that hasn’t kept Amazon, Google, Meta, and Microsoft away from the table. They’ve either signed agreements to buy power from nuclear startups or invested in them directly — or both. Here are the nuclear fission startups backed by Big Tech. Kairos Power Kairos Power received a vote of confidence from Google when the search giant promised to buy around 500 megawatts of electricity by 2035, with the first reactor targeted to come online by 2030. Techcrunch event Join us at TechCrunch Sessions: AI Secure your spot for our leading AI industry event with speakers from OpenAI, Anthropic, and Cohere. For a limited time, tickets are just for an entire day of expert talks, workshops, and potent networking. Exhibit at TechCrunch Sessions: AI Secure your spot at TC Sessions: AI and show 1,200+ decision-makers what you’ve built — without the big spend. Available through May 9 or while tables last. 
Berkeley, CA | June 5 REGISTER NOW The company’s small modular reactors rely on molten fluoride salt for cooling and to transport heat to a steam turbine. The salt’s high boiling point means that the coolant doesn’t need to be kept at high pressure, which should improve operating safety. The reactors contain fuel pebbles coated in carbon and ceramic shells, which should be strong enough to withstand a meltdown. The Alameda-based startup has received a million award from the U.S. government, including million from the Department of Energy. In November 2024, Kairos received approval from the U.S. Nuclear Regulatory Commission to commence construction on two reactors in Tennessee. At 35 megawatts, the test units will be smaller than Kairos’ eventual commercial reactors, which are expected to generate 75 megawatts each. Oklo Oklo is another SMR company targeting the data center world — no surprise given that it was backed by OpenAI CEO Sam Altman, who also took the nuclear startup public via a reverse merger with his special purpose acquisition vehicle, AltC, in July 2023. Altman served as chairman of Oklo until April, when he stepped down as OpenAI began negotiating with Oklo for an energy supply agreement. DCVC, Draper Associates, and Peter Thiel’s Mithril Capital Management are among the startup’s previous investors. Cooled by liquid metal, Oklo’s reactor is based on an existing U.S. Department of Energy design that’s intended to reduce the amount of nuclear waste that results from regular operations. Still, Oklo’s path hasn’t been a smooth one. The company’s first license application was denied in January 2022. Oklo has said it will resubmit the application sometime in 2025. But that hasn’t stopped the company from landing a deal to supply data center operator Switch with 12 gigawatts by 2044. Saltfoss Like Kairos, Saltfoss, formerly known as Seaborg, also wants to build SMRs cooled by molten salt. 
But unlike Kairos and others, it envisions placing two to eight of them on a ship to create what it calls a Power Barge. The startup has raised nearly million, including a million seed round that included investments from Bill Gates, Peter Thiel, and Unity co-founder David Helgason, according to PitchBook. Satlfoss has an agreement with Samsung Heavy Industries to build the ships and the Satlfoss-designed reactors. TerraPower Founded by Bill Gates, TerraPower is building a larger reactor, called Natrium, which is cooled by liquid sodium and features molten salt energy storage. The company broke ground on the first power plant in June 2024 in Wyoming. The Natrium design calls for the reactor to generate 345 megawatts of electricity. That’s smaller than other new nuclear plants today but larger than most SMR designs.  But Natrium has a trick up its sleeve with its molten salt heat storage system. Since nuclear reactors operate best at a steady state, the Natrium reactor can continue breaking atoms when demand is low, and the extra energy is stored as heat in a vat of molten salt, which can be drawn upon later to generate electricity. Investors include Gates’ Cascade Investment fund, Khosla Ventures, CRV, and ArcelorMittal. X-Energy X-Energy landed a hefty million Series C-1 last year led by Amazon’s Climate Pledge Fund. At the same time, the SMR startup announced two development agreements that would see the deployment of 300 megawatts of new nuclear generating capacity in the Pacific Northwest and Virginia. The company’s high-temperature, gas-cooled reactors buck recent trends in the U.S. and Europe, where the design has been shunned in favor of other approaches. The company’s Xe-100 reactor is expected to generate 80 megawatts of electricity. Helium gas flows through the reactor’s 200,000 billiard ball-sized fuel “pebbles,” absorbing heat to spin a steam turbine.  #here #are #nuclear #fission #startups
    Here are the nuclear fission startups backed by Big Tech
    techcrunch.com
    Artificial intelligence has sent demand for electricity skyrocketing in the U.S. after years of virtually zero growth. That has sent Big Tech companies scrambling to secure generating capacity for their data centers. For many, that has meant turning to nuclear fission. The power source has been experiencing a resurgence in the last few years following decades of plant closures. (Fission, used in all current nuclear plants, is distinct from fusion, the still-experimental approach to getting power from atoms that, while attracting investors, has yet to produce more electricity than it consumes.)
    For tech companies, part of the appeal of fission is a stable, predictable source of power that flows 24/7, giving their data centers the potential to run computing loads whenever they require it. But another part of the appeal lies in new reactor designs that promise to overcome the shortcomings of existing nuclear power plants. Where old power plants were built around massive reactors that could generate over 1 gigawatt of electricity, new small modular reactor (SMR) designs see multiple modules deployed alongside each other to meet a range of needs. SMRs rely on mass manufacturing to bring costs down, but to date, no one has built one in the U.S. Still, that hasn’t kept Amazon, Google, Meta, and Microsoft away from the table. They’ve either signed agreements to buy power from nuclear startups or invested in them directly — or both. Here are the nuclear fission startups backed by Big Tech.
    Kairos Power
    Kairos Power received a vote of confidence from Google when the search giant promised to buy around 500 megawatts of electricity by 2035, with the first reactor targeted to come online by 2030. The company’s small modular reactors rely on molten fluoride salt for cooling and to transport heat to a steam turbine. The salt’s high boiling point means that the coolant doesn’t need to be kept at high pressure, which should improve operating safety. The reactors contain fuel pebbles coated in carbon and ceramic shells, which should be strong enough to withstand a meltdown. The Alameda-based startup has received a $629 million award from the U.S. government, including $303 million from the Department of Energy. In November 2024, Kairos received approval from the U.S. Nuclear Regulatory Commission to commence construction on two reactors in Tennessee. At 35 megawatts, the test units will be smaller than Kairos’ eventual commercial reactors, which are expected to generate 75 megawatts each.
    Oklo
    Oklo is another SMR company targeting the data center world — no surprise given that it was backed by OpenAI CEO Sam Altman, who also took the nuclear startup public via a reverse merger with his special purpose acquisition vehicle, AltC, in July 2023. Altman served as chairman of Oklo until April, when he stepped down as OpenAI began negotiating with Oklo for an energy supply agreement. DCVC, Draper Associates, and Peter Thiel’s Mithril Capital Management are among the startup’s previous investors. Cooled by liquid metal, Oklo’s reactor is based on an existing U.S. Department of Energy design that’s intended to reduce the amount of nuclear waste that results from regular operations. Still, Oklo’s path hasn’t been a smooth one. The company’s first license application was denied in January 2022. Oklo has said it will resubmit the application sometime in 2025. But that hasn’t stopped the company from landing a deal to supply data center operator Switch with 12 gigawatts by 2044.
    Saltfoss
    Like Kairos, Saltfoss, formerly known as Seaborg, also wants to build SMRs cooled by molten salt. But unlike Kairos and others, it envisions placing two to eight of them on a ship to create what it calls a Power Barge. The startup has raised nearly $60 million, including a $6 million seed round that included investments from Bill Gates, Peter Thiel, and Unity co-founder David Helgason, according to PitchBook. Saltfoss has an agreement with Samsung Heavy Industries to build the ships and the Saltfoss-designed reactors.
    TerraPower
    Founded by Bill Gates, TerraPower is building a larger reactor, called Natrium, which is cooled by liquid sodium and features molten salt energy storage. The company broke ground on the first power plant in June 2024 in Wyoming. The Natrium design calls for the reactor to generate 345 megawatts of electricity. That’s smaller than other new nuclear plants today but larger than most SMR designs. But Natrium has a trick up its sleeve with its molten salt heat storage system. Since nuclear reactors operate best at a steady state, the Natrium reactor can continue breaking atoms when demand is low, and the extra energy is stored as heat in a vat of molten salt, which can be drawn upon later to generate electricity. Investors include Gates’ Cascade Investment fund, Khosla Ventures, CRV, and ArcelorMittal.
    X-Energy
    X-Energy landed a hefty $700 million Series C-1 last year led by Amazon’s Climate Pledge Fund. At the same time, the SMR startup announced two development agreements that would see the deployment of 300 megawatts of new nuclear generating capacity in the Pacific Northwest and Virginia. The company’s high-temperature, gas-cooled reactors buck recent trends in the U.S. and Europe, where the design has been shunned in favor of other approaches. The company’s Xe-100 reactor is expected to generate 80 megawatts of electricity. Helium gas flows through the reactor’s 200,000 billiard-ball-sized fuel “pebbles,” absorbing heat to spin a steam turbine.
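    The load-following trick behind TerraPower's Natrium design can be sketched numerically. The toy Python model below shows how a reactor running flat out can still track variable demand when paired with a heat store; the 345 MW output comes from the article, but the storage capacity and demand profile are illustrative assumptions, not TerraPower specifications.

```python
# Toy model of a Natrium-style plant: the reactor runs at a constant
# output while a molten-salt store absorbs surplus heat when demand is
# low and is drawn down to cover demand peaks above reactor capacity.
# STORAGE_CAP_MWH and the demand profile are illustrative assumptions.

REACTOR_MW = 345          # steady electrical output cited in the article
STORAGE_CAP_MWH = 1000    # assumed salt-store capacity (illustrative)

def run_day(demand_mw, storage_mwh=0.0):
    """Step through hourly demand; return (power served, storage level) per hour."""
    history = []
    for demand in demand_mw:
        surplus = REACTOR_MW - demand          # positive when demand is low
        if surplus >= 0:
            # charge the salt store with the excess, up to its capacity
            storage_mwh = min(STORAGE_CAP_MWH, storage_mwh + surplus)
            served = demand
        else:
            # discharge stored heat to cover the shortfall
            draw = min(-surplus, storage_mwh)
            storage_mwh -= draw
            served = REACTOR_MW + draw
        history.append((served, storage_mwh))
    return history

# A quiet night (250 MW) charges the store; the evening peak of 420 MW
# exceeds the reactor's 345 MW but is covered by discharging the salt.
day = [250] * 8 + [345] * 8 + [420] * 8
hist = run_day(day)
```

    Under this profile every hour of the 420 MW peak is fully served even though it exceeds the reactor's nameplate output, which is the core argument for pairing a steady-state reactor with thermal storage.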
  • AEWIN Selects Fabric8Labs’ ECAM Technology for Edge AI Thermal Management

    Fabric8Labs, a San Diego-based manufacturer specializing in Electrochemical Additive Manufacturing (ECAM), has been selected by AEWIN Technologies to supply thermal management components for its next generation of Edge AI systems. AEWIN, a provider of high-performance network platforms and a member of the Qisda Business Group, will integrate ECAM-produced copper components into its upcoming cooling infrastructure.
    The partnership addresses increasing thermal challenges in high-density computing environments. Fabric8Labs’ ECAM process enables the additive manufacturing of pure copper structures with high geometric resolution. AEWIN is deploying ECAM-based 3D micro-mesh boiler plates that increase heat exchanger surface area by over 900% and provide thermal improvements greater than 1.3 °C per 100W compared to leading conventional alternatives.
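    The quoted figure of 1.3 °C per 100 W reads as a difference in thermal resistance (temperature rise per watt of heat). A quick back-of-envelope sketch shows what that buys at accelerator-class power draws; the 700 W device load below is an assumed example, not an AEWIN figure.

```python
# Thermal resistance: theta = delta_T / power (°C per W).
# The article quotes an improvement of >1.3 °C per 100 W, i.e. the ECAM
# plate's resistance is at least 0.013 °C/W lower than conventional plates.
# The 700 W load below is an assumed example, not a figure from AEWIN.

IMPROVEMENT_C_PER_W = 1.3 / 100   # 0.013 °C/W, from the article's figure

def temperature_reduction(power_w: float) -> float:
    """Device-temperature drop implied by the lower thermal resistance."""
    return IMPROVEMENT_C_PER_W * power_w

print(temperature_reduction(700))  # ~9.1 °C cooler at a 700 W accelerator load
```

    Small per-watt gains compound at high power: the same plate that saves about 1.3 °C on a 100 W part saves roughly 9 °C on a 700 W accelerator.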
    “Our collaboration with AEWIN represents a significant step forward toward the future of thermal management. We are thrilled to support AEWIN by enabling them to achieve their sustainability targets and meet the growing power demands of advanced AI accelerators,” said Ian Winfield, Vice President of Product & Applications at Fabric8Labs.
    ECAM enables high-resolution, customized designs. Photo via Fabric8Labs.
    AEWIN’s system-level designs are optimized for both PFAS and PFAS-free coolants, supporting various two-phase immersion cooling methodologies. According to Dr. Liu, Director of the Advanced Technical Development Division at AEWIN Technologies, “The exponential growth of data and Edge AI complexity requires the most advanced on-premises computing. Through our advanced system-level design, we are able to leverage Fabric8Labs’ ECAM technology to optimize solutions for high efficiency, power usage effectiveness, and reduced total cost of ownership.”
    The ECAM manufacturing platform enables the production of 3D cooling structures without requiring powder beds or laser-based processes. Fabric8Labs’ approach allows for the fabrication of complex copper geometries suitable for thermal management applications, including capillary network designs that enhance coolant flow at the boiling interface. AEWIN reports that the use of these ECAM-enabled boiler plates supports achieving Power Usage Effectiveness (PUE) below 1.02.
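    Power Usage Effectiveness is the ratio of total facility power to the power delivered to IT equipment, so a PUE below 1.02 implies less than 2% overhead for cooling and power delivery. A minimal sketch of the metric, with facility numbers invented purely for illustration:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# A 1 MW IT load with only 15 kW of cooling and distribution overhead
# (illustrative numbers, not AEWIN's) meets the sub-1.02 figure reported:
print(pue(1015.0, 1000.0))  # ≈ 1.015
```

    For comparison, a PUE of 1.5, long typical of air-cooled facilities, means 500 kW of overhead for the same 1 MW IT load, which is why immersion-cooling vendors emphasize this metric.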
    Founded in 2015, Fabric8Labs develops ECAM systems for electronics, medical devices, communications equipment, and semiconductor manufacturing. Its technology is designed to support dense thermal architectures in data centers and Edge AI infrastructure. The additive process is capable of producing detailed structures with reduced material waste compared to conventional subtractive or powder-based methods.
    AEWIN will exhibit its advanced immersion cooling platform utilizing ECAM-enabled thermal components at Computex 2025, Booth No. M0120.
    3D Printed Thermal Components Expand Across Sectors
    Donkervoort Automobielen, a Dutch supercar manufacturer, recently partnered with Australia-based Conflux Technology to integrate 3D printed water-charge air coolers (WCAC) into its P24 RS model. Using aluminum alloys and tailored fin geometries, the Conflux-designed WCAC units reduce weight from 16 kg to just 1.4 kg per cooler. By relocating the system into the engine bay and shortening the inlet tract, the new thermal architecture enhances throttle response and packaging efficiency. The additively manufactured design, inspired by Formula 1 cooling technology, was adapted for a road-legal vehicle.
    In another recent example, Alloy Enterprises developed a high-efficiency cold plate for NVIDIA’s H100 PCIe card, addressing power density challenges in advanced computing. The component was fabricated from 6061 aluminum using the company’s proprietary Stack Forging process. It features 180-micron microcapillaries, gyroid infill, and monolithic inlet/outlet channels—all optimized using nTop’s generative design software. With a final weight under 550 grams, the liquid cold plate delivers targeted cooling through simulation-derived internal structures.
    The 3D printed aluminum cold plate. Photo via nTop.
    Ready to discover who won the 2024 3D Printing Industry Awards?
    Subscribe to the 3D Printing Industry newsletter to stay updated with the latest news and insights.
    Take the 3DPI Reader Survey — shape the future of AM reporting in under 5 minutes.
    Featured photo shows ECAM enables high-resolution, customized designs. Photo via Fabric8Labs.

    Anyer Tenorio Lara
    Anyer Tenorio Lara is an emerging tech journalist passionate about uncovering the latest advances in technology and innovation. With a sharp eye for detail and a talent for storytelling, Anyer has quickly made a name for himself in the tech community. Anyer's articles aim to make complex subjects accessible and engaging for a broad audience. In addition to his writing, Anyer enjoys participating in industry events and discussions, eager to learn and share knowledge in the dynamic world of technology.
    3dprintingindustry.com
  • The Full Nerd: TechTubers debate Computex’s best and worst PC trends

    Welcome to The Full Nerd newsletter—your weekly dose of hardcore hardware talk from the enthusiasts at PCWorld. In it, we dig into the hottest topics from our YouTube show, plus hot tidbits seen across the web.
    This week, we crack open local Taiwanese beers while chatting about Computex—grab a cold one of your own as you join us on this fine Friday!

    Want this newsletter to come directly to your inbox? Sign up on our website!

    In this episode of The Full Nerd…
    In this episode of The Full Nerd, it’s all things Computex!
    Live from Taiwan, Adam Patrick Murray joins up with Jeff of CraftComputing, Paul of Paul’s Hardware, and Nick of GearSeekers to chat about the highs and lows of their week. With Computex 2025 being a pretty sleepy show, the guys have a more casual two-hour discussion, with more than one tangent about an enthusiast hot topic near and dear to each host’s heart.

    AI and enterprise servers benefitting us consumers? Nvidia’s hijinks for RTX 5060 review timing? Worst of Computex? Best of Computex? Yep, those are all covered. And a lot more, too.
    Finishing out Computex strong. Willis Lai / Foundry

    I literally did a double-take when Paul described this Computex’s vibe as the “enterprise sector being all sexy.” What? And yet, somehow, the tech industry’s latest favorite buzzword could mean good things for consumers. As Jeff explains, AI’s effect on enterprise servers could have benefits for us at home—like if the bubble bursts and suddenly all that hardware makes its way to us. Or, as Adam shares from a talk with SilverStone, we could see more powerful cooling solutions get adapted over, like thick radiators. There’s a muscle build waiting to happen.

    Is it a resistance? Or is it a reprisal? PC reviewers are upset about Nvidia’s review practices—namely, its decisions around the release of its new RTX 5060 graphics card. Sure, reviewers got samples in hand before the launch, but not a pre-release driver—and the launch happened during Computex. In other words? The inability to run numbers in a timely fashion, meaning potential buyers couldn’t make informed decisions when considering this new 50-series GPU.
    Should reviewers complain about not having functional free cards before launch? It’s not that simple, says Nick. He points out a review sample isn’t free, since so much work goes into running numbers and presenting the data. Readers and viewers expect to have information to guide them, and when reviewers can’t provide it, it’s problematic.

    Aesthetics vs. performance—an age-old question, and one that bubbles up as Adam kicks off the Computex disappointments by naming the Hyte X50 & X50 Air. Jeff pushes back, willing to sacrifice a few percent for the joy of looking at something he likes. More disappointing to him? Corsair Air 5400D, the company’s first triple-chamber case that has no panel on one side. And blocks the installation of additional PCI-e add-in cards. But that’s not the only thing that baffled the guys—Paul and Nick have their own nits to pick, too.
    I’m fully on-board with Paul’s pick for best in show. In fact, I may have decided on my own top pick for PCWorld’s Best of Computex roundup after watching his report from G.Skill’s booth. Memory DIMMs may not sound racy, but a set in neon yellow and neon orange can make you reconsider.
    But no one can rival Adam’s enthusiasm for his top pick. In fact, he waxes so poetic about scented thermal paste that I’m slightly reconsidering my stance against it. Still don’t think I’d build with it, but okay, I guess I could at least see it in person. Not sure about that baby-diaper smelling one, though.
    But these topics aren’t the whole of the conversation. Strap in for chatter about AMD’s Radeon strategy, the level of consumer interest in power efficiency, fab capacity, and more.
    Bummed you missed the live show? Subscribe now to The Full Nerd YouTube channel, and activate notifications. We also answer viewer questions in real-time! 
    And if you need more hardware talk during the rest of the week, come join our Discord community—it’s full of cool, laid-back nerds.
    This week’s best nerd news
    Some things should be left in the past. Or at least made with aluminum and a shiny clear coat. Foundry
    Hardware, software, we love all the cool stuff meant for nerdy brains.
    This week is chock full of Computex reveals—which are especially exciting because unlike CES, you can mostly count on seeing these products arrive on retail shelves. The only wrinkle? Pricing may not be certain for U.S. residents, due to ongoing fluctuations with tariffs.

    Get an AMD RX 9060 XT, not Nvidia’s RTX 5060 Ti? AMD claims its upcoming Radeon graphics card costs less and performs better than the Nvidia RTX 5060 and RTX 5060 Ti. If reviews agree, this card will be a boon for mid-range gamers upon its June 5 release.
    Microsoft dropped a PC into coolant designed by AI: I have my doubts about AI’s usefulness, but this experiment at Microsoft Build was pretty dang cool. There was even a demo of Forza Motorsport played on the submersed hardware!
    SilverStone made a throwback beige PC case: I’m going to catch heat from the internet for this, but I hated the beige boxes of the 1990s and still do. However, this retro-style case does come with a lock. And a Turbo button. Hmm.
    Cooler Master’s all-metal case fan is metal as heck: Its Masterfan XT Pro can hit such a high RPM that the product has to ship with a fin grill for safety. But only on the front. Watch your fingers.
    Noctua brings brown town to AIO coolers: A special kind of person loves Noctua’s signature color scheme. Now you’ll no longer need to choose between love for water cooling and for so much brown and tan.
    A split mechanical gaming keyboard for the masses!: An ergonomic keyboard that doesn’t feel gross when typing? And also a gaming keyboard? Sign me up. Y’all, this thing can be tented.
    I want Hyte’s X50 case very badly: I mentioned how much I want one in red, right? Adam’s so wrong about the bubbly edges. It’s so refreshing among a sea of sharp-edged boxy cases.
    AMD is dropping a 96-core Threadripper CPU: For when you crave workstation performance but not workstation prices. Ninety-six cores and 192 threads.

    That’s all for this week—for all my fellow U.S. residents, enjoy the long holiday weekend!
    -Alaina
    This newsletter is dedicated to the memory of Gordon Mah Ung, founder and host of The Full Nerd, and executive editor of hardware at PCWorld. Want The Full Nerd newsletter to come directly to your inbox every Friday morning? Sign up on our website!
    The Full Nerd: TechTubers debate Computex’s best and worst PC trends
    www.pcworld.com
    Welcome to The Full Nerd newsletter—your weekly dose of hardcore hardware talk from the enthusiasts at PCWorld. In it, we dig into the hottest topics from our YouTube show, plus hot tidbits seen across the web. This week, we crack open local Taiwanese beers while chatting about Computex—grab a cold one of your own (or maybe some Kuai Kuai chips?) as you join us on this fine Friday! Want this newsletter to come directly to your inbox? Sign up on our website!

    In this episode of The Full Nerd…

    In this episode of The Full Nerd, it’s all things Computex! Live from Taiwan, Adam Patrick Murray joins up with Jeff of CraftComputing, Paul of Paul’s Hardware, and Nick of GearSeekers to chat about the highs and lows of their week. With Computex 2025 being a pretty sleepy show, the guys have a more casual two-hour discussion, with more than one tangent about an enthusiast hot topic near and dear to each individual’s heart. AI and enterprise servers benefitting us consumers? Nvidia’s hijinks with RTX 5060 review timing? Worst of Computex? Best of Computex? Yep, those are all covered. And a lot more, too.

    Finishing out Computex strong. (Image credit: Willis Lai / Foundry)

    I literally did a double-take when Paul described this Computex’s vibe as the “enterprise sector being all sexy.” What? And yet, somehow, the tech industry’s latest favorite buzzword could mean good things for consumers. As Jeff explains, AI’s effect on enterprise servers could have benefits for us at home—like if the bubble bursts and suddenly all that hardware makes it our way. Or, as Adam shares from a talk with SilverStone, we could see more powerful cooling solutions get adapted over, like thick radiators. There’s a muscle build waiting to happen.

    Is it a resistance? Or is it a reprisal? PC reviewers are upset about Nvidia’s review practices—namely, its decisions around the release of its new RTX 5060 graphics card. Sure, reviewers got samples in hand before the launch, but not a pre-release driver—and the launch happened during Computex. In other words? They were unable to run numbers in a timely fashion, meaning potential buyers couldn’t make informed decisions when considering this new 50-series GPU. Should reviewers complain about not having functional free cards before launch? It’s not that simple, says Nick. He points out a review sample isn’t free, since so much work goes into running numbers and presenting the data. Readers and viewers expect to have information to guide them, and when reviewers can’t provide it, it’s problematic.

    Aesthetics vs. performance—an age-old question, and one that bubbles up as Adam kicks off the Computex disappointments by naming the Hyte X50 & X50 Air. (He’s very wrong. The X50 in red is going to look so good on my desk.) Jeff pushes back, willing to sacrifice a few percent for the joy of looking at something he likes. More disappointing to him? Corsair’s Air 5400D, the company’s first triple-chamber case, which has no panel on one side and blocks the installation of additional PCIe add-in cards. But that’s not the only thing that baffled the guys—Paul and Nick have their own nits to pick, too. (You’ll have to watch the episode for that pun’s context!)

    I’m fully on board with Paul’s pick for best in show. In fact, I may have decided on my own top pick for PCWorld’s Best of Computex roundup after watching his report from G.Skill’s booth. Memory DIMMs may not sound racy, but a set in neon yellow and neon orange can make you reconsider. (I prefer the sparkly silver concept finish. Speaking of, go tell G.Skill you like it too, so it becomes a thing.) But no one can rival Adam’s enthusiasm for his top pick. In fact, he waxes so poetic about scented thermal paste that I’m slightly reconsidering my stance against it. Still don’t think I’d build with it, but okay, I guess I could at least see it in person. Not sure about that baby-diaper-smelling one, though.

    But these topics aren’t the whole of the conversation. Strap in for chatter about AMD’s Radeon strategy, the level of consumer interest in power efficiency (it’s the U.S. vs. the rest of the world), fab capacity, and more. Bummed you missed the live show? Subscribe now to The Full Nerd YouTube channel, and activate notifications. We also answer viewer questions in real time! And if you need more hardware talk during the rest of the week, come join our Discord community—it’s full of cool, laid-back nerds.

    This week’s best nerd news

    Some things should be left in the past. Or at least made with aluminum and a shiny clear coat. (Image credit: Foundry)

    Hardware, software, we love all the cool stuff meant for nerdy brains. This week is chock full of Computex reveals—which are especially exciting because, unlike CES, you can mostly count on seeing these products arrive on retail shelves. The only wrinkle? Pricing may not be certain for U.S. residents, due to ongoing fluctuations with tariffs.

    Get an AMD RX 9060 XT, not Nvidia’s RTX 5060 Ti? AMD claims its upcoming Radeon graphics card costs less and performs better than the Nvidia RTX 5060 and RTX 5060 Ti. If reviews agree, this $350 card will be a boon for mid-range gamers upon its June 5 release.

    Microsoft dropped a PC into coolant designed by AI: I have my doubts about AI’s usefulness, but this experiment at Microsoft Build was pretty dang cool. There was even a demo of Forza Motorsport played on the submersed hardware!

    SilverStone made a throwback beige PC case: I’m going to catch heat from the internet (and my coworkers) for this, but I hated the beige boxes of the 1990s and still do. However, this retro-style case does come with a lock. And a Turbo button. Hmm.

    Cooler Master’s all-metal case fan is metal as heck: Its Masterfan XT Pro can hit such a high RPM (4,000) that the product has to ship with a fin grill for safety. But only on the front. Watch your fingers.

    Noctua brings brown town to AIO coolers: A special kind of person loves Noctua’s signature color scheme (truly, one of our Discord server members is like this and he’s a gem). Now you’ll no longer need to choose between love for water cooling and for so much brown and tan.

    A split mechanical gaming keyboard for the masses!: An ergonomic keyboard that doesn’t feel gross when typing? And also a gaming keyboard? Sign me up. Y’all, this thing can be tented. (Vertical pitch makes this kind of design way more comfy.)

    I want Hyte’s X50 case very badly: I mentioned how much I want one in red, right? Adam’s so wrong about the bubbly edges. It’s so refreshing among a sea of sharp-edged boxy cases.

    AMD is dropping a 96-core Threadripper CPU: For when you crave workstation performance but not workstation prices. Ninety-six cores and 128 threads.

    That’s all for this week—for all my fellow U.S. residents, enjoy the long holiday weekend! -Alaina

    This newsletter is dedicated to the memory of Gordon Mah Ung, founder and host of The Full Nerd, and executive editor of hardware at PCWorld. Want The Full Nerd newsletter to come directly to your inbox every Friday morning? Sign up on our website!
  • Four reasons to be optimistic about AI’s energy usage

    The day after his inauguration in January, President Donald Trump announced Stargate, a $500 billion initiative to build out AI infrastructure, backed by some of the biggest companies in tech. Stargate aims to accelerate the construction of massive data centers and electricity networks across the US to ensure it keeps its edge over China.

    This story is a part of MIT Technology Review’s series “Power Hungry: AI and our energy future,” on the energy demands and carbon costs of the artificial-intelligence revolution.

    The whatever-it-takes approach to the race for worldwide AI dominance was the talk of Davos, says Raquel Urtasun, founder and CEO of the Canadian robotruck startup Waabi, referring to the World Economic Forum’s annual January meeting in Switzerland, which was held the same week as Trump’s announcement. “I’m pretty worried about where the industry is going,” Urtasun says. 

    She’s not alone. “Dollars are being invested, GPUs are being burned, water is being evaporated—it’s just absolutely the wrong direction,” says Ali Farhadi, CEO of the Seattle-based nonprofit Allen Institute for AI.

    But sift through the talk of rocketing costs—and climate impact—and you’ll find reasons to be hopeful. There are innovations underway that could improve the efficiency of the software behind AI models, the computer chips those models run on, and the data centers where those chips hum around the clock.

    Here’s what you need to know about how energy use, and therefore carbon emissions, could be cut across all three of those domains, plus an added argument for cautious optimism: There are reasons to believe that the underlying business realities will ultimately bend toward more energy-efficient AI.

    1/ More efficient models

    The most obvious place to start is with the models themselves—the way they’re created and the way they’re run.

    AI models are built by training neural networks on lots and lots of data. Large language models are trained on vast amounts of text, self-driving models are trained on vast amounts of driving data, and so on.

    But the way such data is collected is often indiscriminate. Large language models are trained on data sets that include text scraped from most of the internet and huge libraries of scanned books. The practice has been to grab everything that’s not nailed down, throw it into the mix, and see what comes out. This approach has certainly worked, but training a model on a massive data set over and over so it can extract relevant patterns by itself is a waste of time and energy.

    There might be a more efficient way. Children aren’t expected to learn just by reading everything that’s ever been written; they are given a focused curriculum. Urtasun thinks we should do something similar with AI, training models with more curated data tailored to specific tasks.

    It’s not just Waabi. Writer, an AI startup that builds large language models for enterprise customers, claims that its models are cheaper to train and run in part because it trains them using synthetic data. Feeding its models bespoke data sets rather than larger but less curated ones makes the training process quicker. For example, instead of simply downloading Wikipedia, the team at Writer takes individual Wikipedia pages and rewrites their contents in different formats—as a Q&A instead of a block of text, and so on—so that its models can learn more from less.
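Writer's actual pipeline is not public, but the "one source, many formats" idea can be sketched mechanically. In the toy example below, the helpers `to_qa_pairs` and `to_bullets` are hypothetical names; a real pipeline would use a model, not string splitting, to do the rewriting.

```python
# Toy illustration of rewriting one source passage into several training
# formats. A real pipeline would use a model for the rewriting step.

source = (
    "The Eiffel Tower was completed in 1889. "
    "It stands about 330 metres tall."
)

def to_sentences(text: str) -> list[str]:
    return [s.strip().rstrip(".") for s in text.split(". ") if s.strip()]

def to_qa_pairs(text: str) -> list[dict]:
    # Hypothetical helper: one Q&A record per sentence.
    return [{"question": f"Fact {i + 1}?", "answer": s + "."}
            for i, s in enumerate(to_sentences(text))]

def to_bullets(text: str) -> str:
    return "\n".join("- " + s for s in to_sentences(text))

# The same facts now exist as prose, Q&A, and a bullet list, so a model
# trained on all three sees each fact in several shapes.
print(to_qa_pairs(source))
print(to_bullets(source))
```

The point is not the string manipulation but the data strategy: each fact appears in multiple shapes, so the model can extract the pattern from a much smaller corpus.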

    Training is just the start of a model’s life cycle. As models have become bigger, they have become more expensive to run. So-called reasoning models that work through a query step by step before producing a response are especially power-hungry because they compute a series of intermediate subresponses for each response. The price tag of these new capabilities is eye-watering: OpenAI’s o3 reasoning model has been estimated to cost tens of thousands of dollars per task to run.

    But this technology is only a few months old and still experimental. Farhadi expects that these costs will soon come down. For example, engineers will figure out how to stop reasoning models from going too far down a dead-end path before they determine it’s not viable. “The first time you do something it’s way more expensive, and then you figure out how to make it smaller and more efficient,” says Farhadi. “It’s a fairly consistent trend in technology.”

    One way to get performance gains without big jumps in energy consumption is to run inference steps in parallel, he says. Parallel computing underpins much of today’s software, especially large language models. Even so, the basic technique could be applied to a wider range of problems. By splitting up a task and running different parts of it at the same time, parallel computing can generate results more quickly. It can also save energy by making more efficient use of available hardware. But it requires clever new algorithms to coordinate the multiple subtasks and pull them together into a single result at the end. 
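The split-run-merge pattern described above can be sketched in a few lines. This is a generic illustration, not how any particular lab parallelizes inference; `run_inference_step` is a stand-in function, not a real model API.

```python
# Sketch: split a query into independent inference subtasks, run them
# at the same time, then merge the partial results.
from concurrent.futures import ThreadPoolExecutor

def run_inference_step(subtask: str) -> str:
    # Stand-in for one inference step; a real system would invoke a model.
    return f"result({subtask})"

def answer(query: str, subtasks: list[str]) -> str:
    with ThreadPoolExecutor(max_workers=4) as pool:
        # The steps run concurrently instead of one after another.
        partials = list(pool.map(run_inference_step, subtasks))
    # The hard part in practice is this merge step: reconciling partial
    # results into a single coherent answer.
    return " | ".join(partials)

print(answer("plan a route", ["check traffic", "check weather"]))
```

Note that `pool.map` preserves the order of subtasks, which keeps the merge deterministic even though execution order is not.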

    The largest, most powerful models won’t be used all the time, either. There is a lot of talk about small models, versions of large language models that have been distilled into pocket-size packages. In many cases, these more efficient models perform as well as larger ones, especially for specific use cases.

    As businesses figure out how large language models fit their needs, this trend toward more efficient bespoke models is taking off. You don’t need an all-purpose LLM to manage inventory or to respond to niche customer queries. “There’s going to be a really, really large number of specialized models, not one God-given model that solves everything,” says Farhadi.

    Christina Shim, chief sustainability officer at IBM, is seeing this trend play out in the way her clients adopt the technology. She works with businesses to make sure they choose the smallest and least power-hungry models possible. “It’s not just the biggest model that will give you a big bang for your buck,” she says. A smaller model that does exactly what you need is a better investment than a larger one that does the same thing: “Let’s not use a sledgehammer to hit a nail.”

    2/ More efficient computer chips

    As the software becomes more streamlined, the hardware it runs on will become more efficient too. There’s a tension at play here: In the short term, chipmakers like Nvidia are racing to develop increasingly powerful chips to meet demand from companies wanting to run increasingly powerful models. But in the long term, this race isn’t sustainable.

    “The models have gotten so big, even running the inference step now starts to become a big challenge,” says Naveen Verma, cofounder and CEO of the upstart microchip maker EnCharge AI.

    Companies like Microsoft and OpenAI are losing money running their models inside data centers to meet the demand from millions of people. Smaller models will help. Another option is to move the computing out of the data centers and into people’s own machines.

    That’s something that Microsoft tried with its Copilot+ PC initiative, in which it marketed a supercharged PC that would let you run an AI model yourself. It hasn’t taken off, but Verma thinks the push will continue because companies will want to offload as much of the costs of running a model as they can.

    But getting AI models to run reliably on people’s personal devices will require a step change in the chips that typically power those devices. These chips need to be made even more energy efficient because they need to be able to work with just a battery, says Verma.

    That’s where EnCharge comes in. Its solution is a new kind of chip that ditches digital computation in favor of something called analog in-memory computing. Instead of representing information with binary 0s and 1s, like the electronics inside conventional, digital computer chips, the electronics inside analog chips can represent information along a range of values in between 0 and 1. In theory, this lets you do more with the same amount of power. 

    Illustration: Shiwen Sven Wang

    EnCharge was spun out from Verma’s research lab at Princeton in 2022. “We’ve known for decades that analog compute can be much more efficient—orders of magnitude more efficient—than digital,” says Verma. But analog computers never worked well in practice because they made lots of errors. Verma and his colleagues have discovered a way to do analog computing that’s precise.
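A toy numerical model makes the error problem concrete. In the sketch below, an "analog" dot product returns the true value plus a small random perturbation per multiply-accumulate; this is an illustrative assumption, not EnCharge's actual design or error model.

```python
# Toy model of the analog-error problem: an analog dot product computes
# the right answer plus per-element device noise.
import random

random.seed(0)  # deterministic for the example

def digital_dot(w, x):
    # Exact digital multiply-accumulate.
    return sum(wi * xi for wi, xi in zip(w, x))

def analog_dot(w, x, noise_std=0.05):
    # Each multiply-accumulate picks up a small random analog error.
    return sum(wi * xi + random.gauss(0.0, noise_std) for wi, xi in zip(w, x))

w = [0.2, 0.5, 0.3]
x = [1.0, 0.5, 0.25]

print(digital_dot(w, x))  # 0.525
print(analog_dot(w, x))   # lands near 0.525, perturbed by noise
```

The errors also compound: a large matrix multiplication chains many such dot products, which is why making analog computation precise, as Verma's team claims to have done, matters so much.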

    EnCharge is focusing just on the core computation required by AI today. With support from semiconductor giants like TSMC, the startup is developing hardware that performs high-dimensional matrix multiplication in an analog chip and then passes the result back out to the surrounding digital computer.

    EnCharge’s hardware is just one of a number of experimental new chip designs on the horizon. IBM and others have been exploring something called neuromorphic computing for years. The idea is to design computers that mimic the brain’s super-efficient processing powers. Another path involves optical chips, which swap out the electrons in a traditional chip for light, again cutting the energy required for computation. None of these designs yet come close to competing with the electronic digital chips made by the likes of Nvidia. But as the demand for efficiency grows, such alternatives will be waiting in the wings. 

    It is also not just chips that can be made more efficient. A lot of the energy inside computers is spent passing data back and forth. IBM says that it has developed a new kind of optical switch, a device that controls digital traffic, that is 80% more efficient than previous switches.   

    3/ More efficient cooling in data centers

    Another huge source of energy demand is the need to manage the waste heat produced by the high-end hardware on which AI models run. Tom Earp, engineering director at the design firm Page, has been building data centers since 2006, including a six-year stint doing so for Meta. Earp looks for efficiencies in everything from the structure of the building to the electrical supply, the cooling systems, and the way data is transferred in and out.

    For a decade or more, as Moore’s Law tailed off, data-center designs were pretty stable, says Earp. And then everything changed. With the shift to processors like GPUs, and with even newer chip designs on the horizon, it is hard to predict what kind of hardware a new data center will need to house—and thus what energy demands it will have to support—in a few years’ time. But in the short term the safe bet is that chips will continue getting faster and hotter: “What I see is that the people who have to make these choices are planning for a lot of upside in how much power we’re going to need,” says Earp.

    One thing is clear: The chips that run AI models, such as GPUs, require more power per unit of space than previous types of computer chips. And that has big knock-on implications for the cooling infrastructure inside a data center. “When power goes up, heat goes up,” says Earp.

    With so many high-powered chips squashed together, air cooling is no longer sufficient. Water has become the go-to coolant because it is better than air at whisking heat away. That’s not great news for local water sources around data centers. But there are ways to make water cooling more efficient.

    One option is to use water to send the waste heat from a data center to places where it can be used. In Denmark water from data centers has been used to heat homes. In Paris, during the Olympics, it was used to heat swimming pools.  

    Water can also serve as a type of battery. Energy generated from renewable sources, such as wind turbines or solar panels, can be used to chill water that is stored until it is needed to cool computers later, which reduces the power usage at peak times.
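The "water as a battery" idea is easy to sanity-check with the specific heat of water. All numbers in the sketch below (tank size, temperature swing) are illustrative assumptions, not figures from the article.

```python
# Back-of-envelope: how much heat a chilled-water tank can absorb
# before it needs re-chilling. Illustrative numbers only.
SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K)

def absorbable_heat_kwh(tank_m3: float, temp_rise_c: float) -> float:
    """Cooling (in kWh) stored in a chilled tank allowed to warm by temp_rise_c."""
    mass_kg = tank_m3 * 1000.0  # roughly 1000 kg per cubic metre of water
    joules = mass_kg * SPECIFIC_HEAT_WATER * temp_rise_c
    return joules / 3.6e6       # joules -> kilowatt-hours

# A hypothetical 500 m^3 tank chilled 10 C below its return temperature:
print(round(absorbable_heat_kwh(500, 10)))  # 5814
```

Thousands of kilowatt-hours of cooling can thus be produced with cheap off-peak or renewable power and spent later, which is the whole point of the scheme.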

    But as data centers get hotter, water cooling alone doesn’t cut it, says Tony Atti, CEO of Phononic, a startup that supplies specialist cooling chips. Chipmakers are creating chips that move data around faster and faster. He points to Nvidia, which is about to release a chip that processes 1.6 terabytes a second: “At that data rate, all hell breaks loose and the demand for cooling goes up exponentially,” he says.

    According to Atti, the chips inside servers suck up around 45% of the power in a data center. But cooling those chips now takes almost as much power, around 40%. “For the first time, thermal management is becoming the gate to the expansion of this AI infrastructure,” he says.
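Atti's two figures imply a striking overhead, which a couple of lines of arithmetic make explicit. The breakdown below simply takes his quoted shares at face value; the "other loads" bucket is an inference, not something he itemizes.

```python
# Quick arithmetic on the shares quoted above: chips ~45% of facility
# power, cooling ~40%. What does that imply overall?
def facility_breakdown(chip_share=0.45, cooling_share=0.40):
    other = 1.0 - chip_share - cooling_share  # networking, storage, losses
    watts_per_chip_watt = 1.0 / chip_share    # facility power per watt of compute
    return other, watts_per_chip_watt

other, overhead = facility_breakdown()
print(f"other loads: {other:.0%}")                      # other loads: 15%
print(f"facility watts per chip watt: {overhead:.2f}")  # 2.22
```

In other words, on these figures every watt delivered to a chip costs more than two watts at the facility gate, which is why Atti calls thermal management the gate to further expansion.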

    Phononic’s cooling chips are small thermoelectric devices that can be placed on or near the hardware that needs cooling. Power an LED chip and it emits photons; power a thermoelectric chip and it emits phonons. In short, phononic chips push heat from one surface to another.

    Squeezed into tight spaces inside and around servers, such chips can detect minute increases in heat and switch on and off to maintain a stable temperature. When they’re on, they push excess heat into a water pipe to be whisked away. Atti says they can also be used to increase the efficiency of existing cooling systems. The faster you can cool water in a data center, the less of it you need.

    4/ Cutting costs goes hand in hand with cutting energy use

    Despite the explosion in AI’s energy use, there’s reason to be optimistic. Sustainability is often an afterthought or a nice-to-have. But with AI, the best way to reduce overall costs is to cut your energy bill. That’s good news, as it should incentivize companies to increase efficiency. “I think we’ve got an alignment between climate sustainability and cost sustainability,” says Verma. “I think ultimately that will become the big driver that will push the industry to be more energy efficient.”

    Shim agrees: “It’s just good business, you know?”

    Companies will be forced to think hard about how and when they use AI, choosing smaller, bespoke options whenever they can, she says: “Just look at the world right now. Spending on technology, like everything else, is going to be even more critical going forward.”

    Shim thinks the concerns around AI’s energy use are valid. But she points to the rise of the internet and the personal computer boom 25 years ago. As the technology behind those revolutions improved, the energy costs stayed more or less stable even though the number of users skyrocketed, she says.

    It’s a general rule Shim thinks will apply this time around as well: When tech matures, it gets more efficient. “I think that’s where we are right now with AI,” she says.

    AI is fast becoming a commodity, which means that market competition will drive prices down. To stay in the game, companies will be looking to cut energy use for the sake of their bottom line if nothing else. 

    In the end, capitalism may save us after all. 
Energy generated from renewable sources, such as wind turbines or solar panels, can be used to chill water that is stored until it is needed to cool computers later, which reduces the power usage at peak times. But as data centers get hotter, water cooling alone doesn’t cut it, says Tony Atti, CEO of Phononic, a startup that supplies specialist cooling chips. Chipmakers are creating chips that move data around faster and faster. He points to Nvidia, which is about to release a chip that processes 1.6 terabytes a second: “At that data rate, all hell breaks loose and the demand for cooling goes up exponentially,” he says. According to Atti, the chips inside servers suck up around 45% of the power in a data center. But cooling those chips now takes almost as much power, around 40%. “For the first time, thermal management is becoming the gate to the expansion of this AI infrastructure,” he says. Phononic’s cooling chips are small thermoelectric devices that can be placed on or near the hardware that needs cooling. Power an LED chip and it emits photons; power a thermoelectric chip and it emits phonons. In short, phononic chips push heat from one surface to another. Squeezed into tight spaces inside and around servers, such chips can detect minute increases in heat and switch on and off to maintain a stable temperature. When they’re on, they push excess heat into a water pipe to be whisked away. Atti says they can also be used to increase the efficiency of existing cooling systems. The faster you can cool water in a data center, the less of it you need. 4/ Cutting costs goes hand in hand with cutting energy use Despite the explosion in AI’s energy use, there’s reason to be optimistic. Sustainability is often an afterthought or a nice-to-have. But with AI, the best way to reduce overall costs is to cut your energy bill. That’s good news, as it should incentivize companies to increase efficiency. 
“I think we’ve got an alignment between climate sustainability and cost sustainability,” says Verma. ”I think ultimately that will become the big driver that will push the industry to be more energy efficient.” Shim agrees: “It’s just good business, you know?” Companies will be forced to think hard about how and when they use AI, choosing smaller, bespoke options whenever they can, she says: “Just look at the world right now. Spending on technology, like everything else, is going to be even more critical going forward.” Shim thinks the concerns around AI’s energy use are valid. But she points to the rise of the internet and the personal computer boom 25 years ago. As the technology behind those revolutions improved, the energy costs stayed more or less stable even though the number of users skyrocketed, she says. It’s a general rule Shim thinks will apply this time around as well: When tech matures, it gets more efficient. “I think that’s where we are right now with AI,” she says. AI is fast becoming a commodity, which means that market competition will drive prices down. To stay in the game, companies will be looking to cut energy use for the sake of their bottom line if nothing else.  In the end, capitalism may save us after all.  #four #reasons #optimistic #about #ais
    Four reasons to be optimistic about AI’s energy usage
    www.technologyreview.com
    The day after his inauguration in January, President Donald Trump announced Stargate, a $500 billion initiative to build out AI infrastructure, backed by some of the biggest companies in tech. Stargate aims to accelerate the construction of massive data centers and electricity networks across the US to ensure it keeps its edge over China. This story is a part of MIT Technology Review’s series “Power Hungry: AI and our energy future,” on the energy demands and carbon costs of the artificial-intelligence revolution.

    The whatever-it-takes approach to the race for worldwide AI dominance was the talk of Davos, says Raquel Urtasun, founder and CEO of the Canadian robotruck startup Waabi, referring to the World Economic Forum’s annual January meeting in Switzerland, which was held the same week as Trump’s announcement. “I’m pretty worried about where the industry is going,” Urtasun says. She’s not alone. “Dollars are being invested, GPUs are being burned, water is being evaporated—it’s just absolutely the wrong direction,” says Ali Farhadi, CEO of the Seattle-based nonprofit Allen Institute for AI.

    But sift through the talk of rocketing costs—and climate impact—and you’ll find reasons to be hopeful. There are innovations underway that could improve the efficiency of the software behind AI models, the computer chips those models run on, and the data centers where those chips hum around the clock. Here’s what you need to know about how energy use, and therefore carbon emissions, could be cut across all three of those domains, plus an added argument for cautious optimism: There are reasons to believe that the underlying business realities will ultimately bend toward more energy-efficient AI.

    1/ More efficient models

    The most obvious place to start is with the models themselves—the way they’re created and the way they’re run. AI models are built by training neural networks on lots and lots of data.
    Large language models are trained on vast amounts of text, self-driving models are trained on vast amounts of driving data, and so on. But the way such data is collected is often indiscriminate. Large language models are trained on data sets that include text scraped from most of the internet and huge libraries of scanned books. The practice has been to grab everything that’s not nailed down, throw it into the mix, and see what comes out. This approach has certainly worked, but training a model on a massive data set over and over so it can extract relevant patterns by itself is a waste of time and energy. There might be a more efficient way.

    Children aren’t expected to learn just by reading everything that’s ever been written; they are given a focused curriculum. Urtasun thinks we should do something similar with AI, training models with more curated data tailored to specific tasks. (Waabi trains its robotrucks inside a superrealistic simulation that allows fine-grained control of the virtual data its models are presented with.)

    It’s not just Waabi. Writer, an AI startup that builds large language models for enterprise customers, claims that its models are cheaper to train and run in part because it trains them using synthetic data. Feeding its models bespoke data sets rather than larger but less curated ones makes the training process quicker (and therefore less expensive). For example, instead of simply downloading Wikipedia, the team at Writer takes individual Wikipedia pages and rewrites their contents in different formats—as a Q&A instead of a block of text, and so on—so that its models can learn more from less.

    Training is just the start of a model’s life cycle. As models have become bigger, they have become more expensive to run. So-called reasoning models that work through a query step by step before producing a response are especially power-hungry because they compute a series of intermediate subresponses for each response.
    The price tag of these new capabilities is eye-watering: OpenAI’s o3 reasoning model has been estimated to cost up to $30,000 per task to run. But this technology is only a few months old and still experimental.

    Farhadi expects that these costs will soon come down. For example, engineers will figure out how to stop reasoning models from going too far down a dead-end path before they determine it’s not viable. “The first time you do something it’s way more expensive, and then you figure out how to make it smaller and more efficient,” says Farhadi. “It’s a fairly consistent trend in technology.”

    One way to get performance gains without big jumps in energy consumption is to run inference steps (the computations a model makes to come up with its response) in parallel, he says. Parallel computing underpins much of today’s software, especially large language models (GPUs are parallel by design). Even so, the basic technique could be applied to a wider range of problems. By splitting up a task and running different parts of it at the same time, parallel computing can generate results more quickly. It can also save energy by making more efficient use of available hardware. But it requires clever new algorithms to coordinate the multiple subtasks and pull them together into a single result at the end.

    The largest, most powerful models won’t be used all the time, either. There is a lot of talk about small models, versions of large language models that have been distilled into pocket-size packages. In many cases, these more efficient models perform as well as larger ones, especially for specific use cases. As businesses figure out how large language models fit their needs (or not), this trend toward more efficient bespoke models is taking off. You don’t need an all-purpose LLM to manage inventory or to respond to niche customer queries. “There’s going to be a really, really large number of specialized models, not one God-given model that solves everything,” says Farhadi.
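    Farhadi’s point about running inference steps in parallel can be sketched with ordinary Python concurrency. This is a toy illustration of the pattern, not any lab’s actual system: `solve_subtask` is a hypothetical stand-in for one independent piece of a split-up query.

```python
from concurrent.futures import ThreadPoolExecutor

def solve_subtask(chunk):
    # Hypothetical stand-in for one independent inference step,
    # e.g. scoring or summarizing one shard of a split-up task.
    return sum(chunk)

def solve_in_parallel(task, workers=4):
    # Split the task into independent parts, run them at the same time,
    # then coordinate the partial results into a single final answer.
    chunks = [task[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(solve_subtask, chunks)
    return sum(partials)  # the merge step that pulls subtasks together

print(solve_in_parallel(list(range(100))))  # → 4950, same as the serial sum
```

    The win is better use of hardware that would otherwise sit idle; as the article notes, the hard part in practice is the coordination and merge logic, which in this sketch is just a trivial `sum`.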
    Christina Shim, chief sustainability officer at IBM, is seeing this trend play out in the way her clients adopt the technology. She works with businesses to make sure they choose the smallest and least power-hungry models possible. “It’s not just the biggest model that will give you a big bang for your buck,” she says. A smaller model that does exactly what you need is a better investment than a larger one that does the same thing: “Let’s not use a sledgehammer to hit a nail.”

    2/ More efficient computer chips

    As the software becomes more streamlined, the hardware it runs on will become more efficient too. There’s a tension at play here: In the short term, chipmakers like Nvidia are racing to develop increasingly powerful chips to meet demand from companies wanting to run increasingly powerful models. But in the long term, this race isn’t sustainable. “The models have gotten so big, even running the inference step now starts to become a big challenge,” says Naveen Verma, cofounder and CEO of the upstart microchip maker EnCharge AI.

    Companies like Microsoft and OpenAI are losing money running their models inside data centers to meet the demand from millions of people. Smaller models will help. Another option is to move the computing out of the data centers and into people’s own machines. That’s something that Microsoft tried with its Copilot+ PC initiative, in which it marketed a supercharged PC that would let you run an AI model (and cover the energy bills) yourself. It hasn’t taken off, but Verma thinks the push will continue because companies will want to offload as much of the costs of running a model as they can.

    But getting AI models (even small ones) to run reliably on people’s personal devices will require a step change in the chips that typically power those devices. These chips need to be made even more energy efficient because they need to be able to work with just a battery, says Verma. That’s where EnCharge comes in.
    Its solution is a new kind of chip that ditches digital computation in favor of something called analog in-memory computing. Instead of representing information with binary 0s and 1s, like the electronics inside conventional, digital computer chips, the electronics inside analog chips can represent information along a range of values in between 0 and 1. In theory, this lets you do more with the same amount of power.

    Image Credit – Shiwen Sven Wang

    EnCharge was spun out from Verma’s research lab at Princeton in 2022. “We’ve known for decades that analog compute can be much more efficient—orders of magnitude more efficient—than digital,” says Verma. But analog computers never worked well in practice because they made lots of errors. Verma and his colleagues have discovered a way to do analog computing that’s precise. EnCharge is focusing just on the core computation required by AI today. With support from semiconductor giants like TSMC, the startup is developing hardware that performs high-dimensional matrix multiplication (the basic math behind all deep-learning models) in an analog chip and then passes the result back out to the surrounding digital computer.

    EnCharge’s hardware is just one of a number of experimental new chip designs on the horizon. IBM and others have been exploring something called neuromorphic computing for years. The idea is to design computers that mimic the brain’s super-efficient processing powers. Another path involves optical chips, which swap out the electrons in a traditional chip for light, again cutting the energy required for computation. None of these designs yet come close to competing with the electronic digital chips made by the likes of Nvidia. But as the demand for efficiency grows, such alternatives will be waiting in the wings.

    It is also not just chips that can be made more efficient. A lot of the energy inside computers is spent passing data back and forth.
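    The analog-versus-digital trade-off EnCharge is working on can be illustrated with a toy simulation (my own sketch, not EnCharge’s design): an “analog” matrix-vector multiply gives roughly the right answer, but every multiply-accumulate term picks up a little noise, which is exactly the error problem Verma’s team had to solve.

```python
import random

def digital_mvm(matrix, vector):
    # Exact matrix-vector multiply, as a digital chip computes it.
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

def analog_mvm(matrix, vector, noise=0.01):
    # Toy model of analog in-memory compute: each multiply-accumulate
    # term picks up a small random error, so results are approximate.
    return [sum(w * x + random.gauss(0, noise) for w, x in zip(row, vector))
            for row in matrix]

m, v = [[1.0, 2.0], [3.0, 4.0]], [0.5, -1.0]
print(digital_mvm(m, v))  # exact: [-1.5, -2.5]
print(analog_mvm(m, v))   # close to [-1.5, -2.5], but never exactly right
```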
    IBM says that it has developed a new kind of optical switch, a device that controls digital traffic, that is 80% more efficient than previous switches.

    3/ More efficient cooling in data centers

    Another huge source of energy demand is the need to manage the waste heat produced by the high-end hardware on which AI models run. Tom Earp, engineering director at the design firm Page, has been building data centers since 2006, including a six-year stint doing so for Meta. Earp looks for efficiencies in everything from the structure of the building to the electrical supply, the cooling systems, and the way data is transferred in and out.

    For a decade or more, as Moore’s Law tailed off, data-center designs were pretty stable, says Earp. And then everything changed. With the shift to processors like GPUs, and with even newer chip designs on the horizon, it is hard to predict what kind of hardware a new data center will need to house—and thus what energy demands it will have to support—in a few years’ time. But in the short term the safe bet is that chips will continue getting faster and hotter: “What I see is that the people who have to make these choices are planning for a lot of upside in how much power we’re going to need,” says Earp.

    One thing is clear: The chips that run AI models, such as GPUs, require more power per unit of space than previous types of computer chips. And that has big knock-on implications for the cooling infrastructure inside a data center. “When power goes up, heat goes up,” says Earp. With so many high-powered chips squashed together, air cooling (big fans, in other words) is no longer sufficient. Water has become the go-to coolant because it is better than air at whisking heat away. That’s not great news for local water sources around data centers. But there are ways to make water cooling more efficient. One option is to use water to send the waste heat from a data center to places where it can be used.
    In Denmark water from data centers has been used to heat homes. In Paris, during the Olympics, it was used to heat swimming pools.

    Water can also serve as a type of battery. Energy generated from renewable sources, such as wind turbines or solar panels, can be used to chill water that is stored until it is needed to cool computers later, which reduces the power usage at peak times.

    But as data centers get hotter, water cooling alone doesn’t cut it, says Tony Atti, CEO of Phononic, a startup that supplies specialist cooling chips. Chipmakers are creating chips that move data around faster and faster. He points to Nvidia, which is about to release a chip that processes 1.6 terabytes a second: “At that data rate, all hell breaks loose and the demand for cooling goes up exponentially,” he says. According to Atti, the chips inside servers suck up around 45% of the power in a data center. But cooling those chips now takes almost as much power, around 40%. “For the first time, thermal management is becoming the gate to the expansion of this AI infrastructure,” he says.

    Phononic’s cooling chips are small thermoelectric devices that can be placed on or near the hardware that needs cooling. Power an LED chip and it emits photons; power a thermoelectric chip and it emits phonons (which are to vibrational energy—a.k.a. temperature—as photons are to light). In short, phononic chips push heat from one surface to another. Squeezed into tight spaces inside and around servers, such chips can detect minute increases in heat and switch on and off to maintain a stable temperature. When they’re on, they push excess heat into a water pipe to be whisked away. Atti says they can also be used to increase the efficiency of existing cooling systems. The faster you can cool water in a data center, the less of it you need.

    4/ Cutting costs goes hand in hand with cutting energy use

    Despite the explosion in AI’s energy use, there’s reason to be optimistic.
    Sustainability is often an afterthought or a nice-to-have. But with AI, the best way to reduce overall costs is to cut your energy bill. That’s good news, as it should incentivize companies to increase efficiency. “I think we’ve got an alignment between climate sustainability and cost sustainability,” says Verma. “I think ultimately that will become the big driver that will push the industry to be more energy efficient.” Shim agrees: “It’s just good business, you know?”

    Companies will be forced to think hard about how and when they use AI, choosing smaller, bespoke options whenever they can, she says: “Just look at the world right now. Spending on technology, like everything else, is going to be even more critical going forward.”

    Shim thinks the concerns around AI’s energy use are valid. But she points to the rise of the internet and the personal computer boom 25 years ago. As the technology behind those revolutions improved, the energy costs stayed more or less stable even though the number of users skyrocketed, she says. It’s a general rule Shim thinks will apply this time around as well: When tech matures, it gets more efficient. “I think that’s where we are right now with AI,” she says.

    AI is fast becoming a commodity, which means that market competition will drive prices down. To stay in the game, companies will be looking to cut energy use for the sake of their bottom line if nothing else. In the end, capitalism may save us after all.
  • Ventiva’s fanless cooling wants to revolutionize tomorrow’s laptops

    Ventiva’s fanless PC cooling technology is evolving from a curiosity to what appears to be a genuine game-changer: not only is it demonstrating 45W cooling capabilities with two partners, but Ventiva is also claiming that its ICE9 system can cool up to 100 watts of thermal energy as well.
    Dell — the partner with which Ventiva originally worked — is one of the companies interested in the 45W cooling solution. The other is Compal, a “white box” contract manufacturer that builds PCs for any number of vendors who then claim them as their own.
    Ventiva surfaced late last year, and we sat down with company executives at CES 2025. Rivals like Frore or xMEMS use a vibrating membrane to replicate the actions of a fan, moving cool air over heated elements within a PC and then outside the system. Ventiva essentially ionizes the air, which is pushed away from a charged wire and creates airflow.
    The amount of air moved, and how much cooling is applied, depends on a few factors: the size of the cooling component (which Ventiva calls an ICE), how much charge is applied, and how many ICE devices are working together. At CES 2025, however, Ventiva was talking about moving just 25 watts’ worth of thermal energy, enough for the 15W of an Intel Core Ultra “Meteor Lake”-U chip, for example, but not quite enough for the 28W “Arrow Lake” chips or the rival Ryzen AI 300 processors, whose TDPs are also about 28W.
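    The wattage comparisons above boil down to simple headroom arithmetic: a cooling solution works for a chip only if it can move at least as much heat as the chip dissipates. A throwaway sketch (the figures come from the article; the helper function is mine):

```python
def can_cool(cooling_watts, chip_tdp_watts):
    # A cooler is viable only if it can move at least as much
    # thermal energy as the chip dissipates.
    return cooling_watts >= chip_tdp_watts

# Figures from the article: the 25W CES demo vs. 15W and 28W chips.
print(can_cool(25, 15))  # True: enough for a Meteor Lake-U part
print(can_cool(25, 28))  # False: short of Arrow Lake / Ryzen AI 300
print(can_cool(45, 28))  # True: the newer 45W solution covers both
```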

    By pushing up to 40W, Ventiva’s partnerships with Compal and Dell would allow both companies to create laptop reference designs that could accommodate a wider variety of PC processors, even while running in excess of their rated TDP in turbo mode. The ICE technology is less than 12mm high, allowing thinner laptops to be made.
    Ventiva is also looking at the future. The company is demonstrating a 100W test laptop at Computex 2025 this week, which it will presumably use to strike even more partnerships.
    “AI-driven laptops are transforming the way we work, create, and play, but their increasing thermal output requires a new level of device heat management,” said Carl Schlachte, chairman, president and chief executive of Ventiva, in a statement. “This is our highest-performing thermal management system to date, enabling laptop OEMs and ODMs to push power to the limit, and stay totally cool, under any workload, from 3D design to AI development to immersive game playing.” 
    While 100 watts of cooling is well below what gaming laptops can consume under full load, there’s certainly a chance that a midrange laptop might be able to use Ventiva’s solution for some sort of gaming application. And boy, wouldn’t a silent gaming laptop — without the need to dunk it in a vat of coolant — be a thing of beauty?
    Ventiva’s fanless cooling wants to revolutionize tomorrow’s laptops
    www.pcworld.com
  • Microsoft used AI to invent a safer coolant — and dunked a PC in it

    Microsoft says it used its own agentic reasoning AI model to help develop and synthesize a new immersion fluid for PC cooling—and then confirmed that it worked by dunking a PC motherboard into a vat of it.
    John Link, Microsoft’s principal program manager for product innovation, closed out Microsoft’s Build developer keynote by showing off how Microsoft innovated a new immersion cooling technology using Copilot AI without any PFAS.
    PC boards and server racks can be cooled by air, by water, or by connecting metal heat exchangers with fluid-filled tubing that thermally routes the heat of a processor to the outside world. Immersion cooling is an extreme example of this, which uses electrically non-conductive fluids that surround the entire board. Essentially, the entire board is submerged. Water can’t be used because it would short out the system, so PFAS can be used instead—but PFAS presents environmental and health hazards.
    Link used what Microsoft calls Microsoft Discovery, an agentic research system. Agentic AI is Microsoft’s next big thing, and your one-on-one interactions with Copilot will soon give way to managing individual AIs that autonomously perform specialized tasks.

    Submerged, cooled, and running Forza. (Image: YouTube)
    According to Microsoft, the model uses both proprietary data and external research to try to develop relationships between the data. In Link’s demonstration, it used both a “Knowledge Base” agent and a specialized chemistry agent. The example excluded any proposed molecules containing PFAS and kept only candidates that fell within a specified dielectric range and boiling-point window.
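The screening step described above amounts to filtering candidates against hard constraints. A minimal sketch, assuming hypothetical field names and thresholds (Microsoft has not published Discovery's actual schema or the real property windows):

```python
# Hypothetical candidate-screening filter: drop PFAS-containing molecules,
# keep those inside target dielectric and boiling-point windows.
# Field names and numeric ranges are illustrative assumptions.

def screen(candidates, dielectric_range=(1.5, 2.5), boiling_range=(40.0, 80.0)):
    lo_k, hi_k = dielectric_range
    lo_bp, hi_bp = boiling_range
    return [
        c for c in candidates
        if not c["contains_pfas"]
        and lo_k <= c["dielectric_constant"] <= hi_k
        and lo_bp <= c["boiling_point_c"] <= hi_bp
    ]

candidates = [
    {"name": "A", "contains_pfas": True,  "dielectric_constant": 1.8, "boiling_point_c": 60.0},
    {"name": "B", "contains_pfas": False, "dielectric_constant": 2.0, "boiling_point_c": 55.0},
    {"name": "C", "contains_pfas": False, "dielectric_constant": 3.1, "boiling_point_c": 55.0},
]
print([c["name"] for c in screen(candidates)])  # → ['B']
```

In the real system, the chemistry agent proposes candidates and this kind of constraint check prunes them before synthesis is attempted.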
    You can watch Link’s Microsoft Build 2025 keynote closeout to see what he discovered, but it appears to be a member of the alkene family.
    More to the point, Link said that the discovery was promising enough that Microsoft synthesized enough of it to dunk a motherboard and PC processor inside a container of the stuff, and then ran Forza Motorsport to prove that it worked.
    www.pcworld.com
  • NVIDIA and Microsoft Accelerate Agentic AI Innovation, From Cloud to PC

    Agentic AI is redefining scientific discovery and unlocking research breakthroughs and innovations across industries. Through deepened collaboration, NVIDIA and Microsoft are delivering advancements that accelerate agentic AI-powered applications from the cloud to the PC.
    At Microsoft Build, Microsoft unveiled Microsoft Discovery, an extensible platform built to empower researchers to transform the entire discovery process with agentic AI. This will help research and development departments across various industries accelerate the time to market for new products, as well as speed and expand the end-to-end discovery process for all scientists.
    Microsoft Discovery will integrate the NVIDIA ALCHEMI NIM microservice, which optimizes AI inference for chemical simulations, to accelerate materials science research with property prediction and candidate recommendation. The platform will also integrate NVIDIA BioNeMo NIM microservices, tapping into pretrained AI workflows to speed up AI model development for drug discovery. These integrations equip researchers with accelerated performance for faster scientific discoveries.
    In testing, researchers at Microsoft used Microsoft Discovery to detect a novel coolant prototype with promising properties for immersion cooling in data centers in under 200 hours, rather than months or years with traditional methods.
    Advancing Agentic AI With NVIDIA GB200 Deployments at Scale
    Microsoft is rapidly deploying tens of thousands of NVIDIA GB200 NVL72 rack-scale systems across its Azure data centers, boosting both performance and efficiency.
    Azure’s ND GB200 v6 virtual machines — built on a rack-scale architecture with up to 72 NVIDIA Blackwell GPUs per rack and advanced liquid cooling — deliver up to 35x more inference throughput compared with previous ND H100 v5 VMs accelerated by eight NVIDIA H100 GPUs, setting a new benchmark for AI workloads.
    These innovations are underpinned by custom server designs, high-speed NVIDIA NVLink interconnects and NVIDIA Quantum InfiniBand networking — enabling seamless scaling to tens of thousands of Blackwell GPUs for demanding generative and agentic AI applications.
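The 35x figure above compares whole VMs with very different GPU counts (72 Blackwell GPUs vs. 8 H100s). Normalizing per GPU separates rack-scale gains from per-chip gains; this is simple arithmetic on the article's numbers, not NVIDIA's benchmark methodology:

```python
# Per-GPU speedup implied by the article's VM-level comparison.
rack_speedup = 35.0          # ND GB200 v6 vs. ND H100 v5, per the article
gpus_new, gpus_old = 72, 8   # Blackwell GPUs per rack vs. H100s per VM

per_gpu_speedup = rack_speedup / (gpus_new / gpus_old)
print(round(per_gpu_speedup, 2))  # → 3.89
```

So roughly a 3.9x per-GPU inference gain, with the rest of the 35x coming from packing nine times as many GPUs into one NVLink domain.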
    Microsoft chairman and CEO Satya Nadella and NVIDIA founder and CEO Jensen Huang also highlighted how Microsoft and NVIDIA’s collaboration is compounding performance gains through continuous software optimizations across NVIDIA architectures on Azure. This approach maximizes developer productivity, lowers total cost of ownership and accelerates all workloads, including AI and data processing  — all while driving greater efficiency per dollar and per watt for customers.
    NVIDIA AI Reasoning and Healthcare Microservices on Azure AI Foundry
    Building on the NIM integration in Azure AI Foundry, announced at NVIDIA GTC, Microsoft and NVIDIA are expanding the platform with the NVIDIA Llama Nemotron family of open reasoning models and NVIDIA BioNeMo NIM microservices, which deliver enterprise-grade, containerized inferencing for complex decision-making and domain-specific AI workloads.
    Developers can now access optimized NIM microservices for advanced reasoning in Azure AI Foundry. These include the NVIDIA Llama Nemotron Super and Nano models, which offer advanced multistep reasoning, coding and agentic capabilities, delivering up to 20% higher accuracy and 5x faster inference than previous models.
    Healthcare-focused BioNeMo NIM microservices like ProteinMPNN, RFDiffusion and OpenFold2 address critical applications in digital biology, drug discovery and medical imaging, enabling researchers and clinicians to accelerate protein science, molecular modeling and genomic analysis for improved patient care and faster scientific innovation.
    This expanded integration empowers organizations to rapidly deploy high-performance AI agents, connecting to these models and other specialized healthcare solutions with robust reliability and simplified scaling.
    Accelerating Generative AI on Windows 11 With RTX AI PCs
    Generative AI is reshaping PC software with entirely new experiences — from digital humans to writing assistants, intelligent agents and creative tools. NVIDIA RTX AI PCs make it easy to get started experimenting with generative AI and unlock greater performance on Windows 11.
    At Microsoft Build, NVIDIA and Microsoft are unveiling an AI inferencing stack to simplify development and boost inference performance for Windows 11 PCs.
    NVIDIA TensorRT has been reimagined for RTX AI PCs, combining industry-leading TensorRT performance with just-in-time, on-device engine building and an 8x smaller package size for seamless AI deployment to the more than 100 million RTX AI PCs.
    Announced at Microsoft Build, TensorRT for RTX is natively supported by Windows ML — a new inference stack that provides app developers with both broad hardware compatibility and state-of-the-art performance. TensorRT for RTX is available in the Windows ML preview starting today, and will be available as a standalone software development kit from NVIDIA Developer in June.
    Learn more about how TensorRT for RTX and Windows ML are streamlining software development. Explore new NIM microservices and AI Blueprints for RTX, and RTX-powered updates from Autodesk, Bilibili, Chaos, LM Studio and Topaz in the RTX AI PC blog, and join the community discussion on Discord.
    Explore sessions, hands-on workshops and live demos at Microsoft Build to learn how Microsoft and NVIDIA are accelerating agentic AI.
    blogs.nvidia.com
  • $8000* Disaster Prebuilt PC - Corsair & Origin Fail Again

    PC Builds
    Disaster Prebuilt PC - Corsair & Origin Fail Again
    May 19, 2025 | Last Updated: 2025-05-19
    We test Origin’s expensive PC’s thermals, acoustics, power, and frequency, and perform a tear-down.

    The Highlights
    Our Origin Genesis PC comes with an RTX 5090, 9800X3D, and 32GB of system memory
    Due to poor system thermals, the memory on the GPU fails our testing
    The fans in the system don’t ramp up until the liquid-cooled CPU gets warm, which means the air-cooled GPU temperature suffers

    Original MSRP: +
    Release Date: January 2025

    Our fully custom 3D Emblem Glasses celebrate our 15th Anniversary! We hand-assemble these on the East Coast in the US with a metal badge, strong adhesive, and high-quality pint glass. They pair excellently with our 3D 'Debug' Drink Coasters. Purchases keep us ad-free and directly support our consumer-focused reviews!

    Intro
    We paid for Origin PC’s 5090-powered Genesis when it launched, or after taxes. Today, a similar build has a list price of Markup is to over DIY. This computer costs as much as an RTX Pro 6000, or a used car, or a brand new Kia Rio with a lifetime warranty in 2008 with passenger doors that fall off… The point is, this is expensive, and it also sucks.

    Editor's note: This was originally published on May 16, 2025 as a video. This content has been adapted to written format for this article and is unchanged from the original publication.

    Credits
    Test Lead, Host, Writing: Steve Burke
    Video Editing, Camera: Mike Gaglione
    Testing, Writing: Jeremy Clayton
    Camera: Tim Phetdara
    Writing, Web Editing: Jimmy Thang

    The RTX 5090 is the most valuable thing in this build for its 32GB of VRAM, and to show you how much they care about the only reason you’d buy this prebuilt, Origin incinerates the memory at 100 degrees Celsius by choosing not to spin the fans for 8 minutes while under load.
    The so-called “premium” water cooling includes tubes made out of discolored McDonald’s toy plastic that was left in the sun too long, making it look old, degraded, and dirty. But there are some upsides for this expensive computer. For example, it’s quiet, to its credit, mostly because the fans don’t spin… for 8 minutes.

    Overview
    Originally, this Origin Genesis pre-built cost – and that’s after taxes and a discount off the initial sticker price of We ordered it immediately after the RTX 5090 launch, which turned out to be one of the only reliable ways to actually get a 5090 with supply as bad as it was. It took a while to come in, but it did arrive in the usual Origin crate.

    We reviewed one of these a couple years ago that was a total disaster of a combo. The system had a severely underclocked CPU, ridiculously aggressive fan behavior, chipped paint, and a nearly unserviceable hardline custom liquid cooling loop. Hopefully this one has improved. And hopefully isn’t 1GHz below spec.

    Parts and Price
    Origin PC RTX 5090 + 9800X3D "Genesis" parts (retail, as of April 2025) | GamersNexus
    Motherboard: MSI PRO B650-P WIFI
    CPU: Ryzen 7 9800X3D
    Graphics Card: NVIDIA RTX 5090 Founders Edition
    RAM: Corsair Vengeance DDR5-6000
    SSD 1: Corsair MP600 CORE XT 1TB PCIe 4 M.2 SSD
    Custom Loop: "Hydro X iCUE LINK Cooling" / Pump, Rad, Block, Fittings
    Fans: 12x Corsair iCUE LINK RX120 120mm Fan
    Case: Corsair 7000D Airflow
    PSU: Corsair RM1200x SHIFT 80+ Gold PSU
    RGB/Fan Controller: 2x Corsair iCUE Link System Hub
    Operating System: Windows 11 (N/A)
    T-Shirt: ORIGIN PC T-Shirt (N/A)
    Mousepad: ORIGIN PC Mouse Pad (N/A)
    Shipping: "ORIGIN Maximum Protection Shipping Process: ORIGIN Wooden Crate Armor" (N/A)
    ???: "The ORIGIN Difference: Unrivaled Quality & Performance" (Priceless)

    We’ll price it out based on the original, pre-tariff build before taxes and with a 10% off promo.
    Keep in mind that the new price is to depending on when you buy. The good news is that nothing is proprietary – all of its parts are standard. The bad news is that this means we can directly compare it to retail parts which, at the time we wrote this piece, would cost making for a markup compared to the pre-tax subtotal. That’s a huge amount to pay for someone to screw the parts together. Given the price of the system, the MSI PRO B650-P WIFI motherboard and 1TB SSD are stingy, and the 7000D Airflow case is old at this point. The parts don’t match the price.

    Just two months after we ordered and around when it finally arrived, Origin now offers a totally different case and board with the Gigabyte X870E Aorus Elite. The base SSD is still just 1TB though – only good enough for roughly two or three full Call of Duty installs.

    The detailed packing sheet lists 22 various water cooling fittings, but, curiously, the build itself only has 15, plus one more in the accessory kit, making it 16 by our count. We don’t know how Origin got 22 here, but it isn’t 22. Hopefully we weren’t charged for 22. Oh, and it apparently comes with “1 Integrated High-Definition.” Good. That’s good. We wouldn’t want 0 integrated high definitions.

    Similar to last time, you also get “The ORIGIN Difference: Unrivaled Quality & Performance” as a line item. Putting intangible, unachievable promises on the literal receipt is the Origin way: Origin’s quality is certainly rivaled.

    Against DIY, pricing is extreme and insane as an absolute dollar amount when the other SIs are around -markup at the high end. In order for this system to be “worth” more than DIY, it would need to be immaculate, and it’s not. The only real value the PC offers is the 5090. Finding a 5090 Founders Edition now for is an increasingly unlikely scenario.
    Lately, price increases with scarcity and tariffs have resulted in 5090s closer to or more, so the markup with that instead would be if we assume a 5090 costs That’s still a big markup, and the motherboard is still disappointing, the tubes are still discolored, the SSD is too small, and it still has problems with the fans not properly spinning, but it’s less insane.

    Build Quality
    Getting into the parts choices: this new Genesis has a loop that’s technically set up better than the last one, but it only cools the CPU. That means we have a computer with water cooling, but only on the cooler of the two silicon parts -- the one that pulls under 150W. That leaves the 575W RTX 5090 FE to fend for itself, and that doesn’t always go well.

    Originally, Origin didn’t have the option to water cool the 5090. It’s just a shame that Origin isn’t owned by a gigantic PC hardware company that manufactures its own water cooling components and even has its own factories and is publicly traded and transacts billions of dollars a year to the point that it might have had enough access to make a block... A damn shame. Maybe we’ll buy from a bigger company next time.

    At least now, with the new sticker price of you can spend another and add a water block to the GPU. Problem solved -- turns out, we just needed to spend even more money.

    Here’s a closer look at Origin’s “premium” cooling solution, complete with saggy routing that looks deflated and discolored tubing that has that well-hydrated catheter tube coloring to it. The fluid is clean and the contents of the block are fine, but the tubing is the problem. In fact, the included drain tube is the correct coloring, making it even more obvious how discolored the loop is.

    Corsair says its XT Softline tubing is “UV-resistant tubing made to withstand the test of time without any discoloration or deforming.” So clearly something is wrong. Or not “clearly,” actually, seeing as it’s not clear. The tubing looks gross. It shouldn’t look gross.
    The spare piece in the accessory kit doesn’t look gross. The coolant is even Corsair’s own XL8 clear fluid, making it even more inexcusable. We’re not the only ones to have this problem, though – we found several posts online with the same issue and very little in the way of an official response from Corsair or Origin. We only saw one reply asking the user to contact support.

    Even without the discoloration, it comes off as looking amateurish from the way it just hangs around the inside of the case. There’s not a lot you can do about long runs of flexible tubing, unless maybe you’re the one building it and have complete control of everything in the pipeline...

    There is one thing we can compliment about the loop: Origin actually added a ball valve at the bottom underneath the pump for draining and maintenance, which is something that we directly complained about on the previous Origin pre-built. We’re glad to see that get addressed.

    The fans in the build are part of Corsair’s relatively new LINK family, so they’re all daisy-chained together with a single USB-C-esque cable and controlled in tandem by two of Corsair’s hubs. It’s an interesting system that extends to include the pump and CPU block – both of which have liquid temperature sensors.

    Tear-down
    Grab a GN15 Large Anti-Static Modmat to celebrate our 15th Anniversary and for a high-quality PC building work surface. The Modmat features useful PC building diagrams and is anti-static conductive. Purchases directly fund our work!

    We’re starting the tear-down by looking at the cable management side. Opening up the swinging side panel, we noticed masking tape on the dust filter, which we’re actually okay with as it’s there to keep the filter in place during shipping and is removable. Internally, they’ve included all of the unused PSU cables in the system’s accessories box, which we’ll talk more about down below. The cable routing makes sense and is generally well managed.
    While they tied the cables together, not all of the ties were tied down to the chassis. The system uses the cable management channel for the 24-pin connector. Overall, it’s clean and they’ve done well here.

    Looking at the other side of the system, we can see that the power cable leading into the 5090 is mostly seated, and isn’t a concern to us. Removing the water block’s cable, it had a little piece of plastic which acted as a pull tab. That’s actually kind of nice. Removing the screws on the water block reveals that they’re captive, which is also nice. Looking at the pattern, we can see that Origin used pre-applied paste via a silk screen. That allowed contact for all 8 legs of the IHS, which looked good with overall even pressure. The block application was also good.

    Looking at how well all of the cables were seated, everything was fine from the CPU fan header down to the front panel connectors. Removing the heatsink off the NVMe SSD, we didn’t see any plastic left on the thermal pad, which is good. Looking at the 16GB DDR5-6000 RAM modules, they’re in the correct slots, and Origin outfitted them with Corsair 36-44-44-96 sticks, which are not the greatest timings. Checking the tightness of all the screws on the motherboard, we didn’t encounter any loose ones. Removing the motherboard from the case, everything looked fine, though it’s a lower-end board than we’d like to see in a premium system.

    Looking at the fans, they’re immaculately installed, which is partially due to how they’re connected together. This results in a very clean setup. The back side of the PC has a massive radiator. Overall, the system has very clean cable management and the assembly was mostly good, which leaves the system’s biggest issues as its value and its water-cooling setup. We didn’t drain the loop, so we’re going to keep running it and see what it looks like down the road.
Thermal BenchmarksSystem Thermals at Steady StateGetting into the benchmarking, we’ll start with thermals.Right away, the 96-degree result on the memory junction is a problem -- especially because this is an average, which means we have spikes periodically to 100 degrees. The technical rating on this memory is 105 degrees for maximum safety spec. This is getting way too close and is hotter than what we saw in our 5090 FE review. This is also when all of the thermal pads are brand new. The Origin pre-built uses a large case with 12 fans, so it should be impossible for the GPU to be this hot. The Ryzen 9800X3D hit 87C at steady-state – which is also not great for how much cooling is in this box. All of the various motherboard and general system temperature sensors fell well within acceptable ranges.Finally, the watercooling parts provide a couple of liquid temperatures. The pump is on the “cool” side of the loop and read 36.7C at steady state, while the coolant in the block on the “hot” side of the loop got up to 41.3C. You typically want liquid temperature to stay under 55Cto not violate spec on the pump and tubing, so this is fine.We need to plot these over time to uncover some very strange behavior.CPU Temperature vs. Fan Speeds Over TimeCPU temperature during the test starts out on a slow ramp upwards during the idle period. When the CPU load first starts, we see an immediate jump to about 72C, a brief drop, then a long and steady rise from roughly 250 seconds to 750 seconds into the test where it levels off at the 87C mark. The VRM temperature follows the same general curve, but takes longer to reach steady-state. Adding the liquid temperatures to the chart shows the same breakpoints.Finally, adding pump and fan speeds gives us the big reveal for why the curves look like this. The pump stair steps up in speed while the temperatures rise, but the fans don’t even turn on for over 8 minutes into the load’s runtime. 
Once they’re actually running, they average out to just 530RPM, which is so slow that they might as well be off.This is an awful configuration. Response to liquid temperature isn’t new, but this is done without any thought whatsoever. If you tie all fans to liquid temperature, and if you have parts not cooled by liquid like VRAM on the video card, then you’re going to have a bad time. And that’s the next chart. But before that one, this is an overcorrection from how Origin handled the last custom loop PC we reviewed from the company, which immediately ramped the fans up high as it could as soon as the CPU started doing anything. Maybe now they can find a middle ground since we’ve found the two extremes of thoughtless cooling.GPU Temperature vs. Fan Speeds Over TimeThis chart shows GPU temperatures versus GPU fan speed.The GPU temperature under load rises to around 83C before coming back down when the case fans finally kick on. As a reminder, 83-84 degrees is when NVIDIA starts hard throttling the clocks more than just from GPU Boost, so they’re dropping clocks as a result of this configuration.The 5090’s VRAM already runs hot on an open bench – 89 to 90 degrees Celsius – and that gets pushed up to peak at 100C in the Origin pre-built. This is unacceptable. Adding the GPU fan speed to the chart shows us how the Founders Edition cooler attempts to compensate by temporarily boosting fan speed to 56% during this time, which also means that Origin isn’t even benefiting as much from the noise levels as it should from the slower fans. Balancing them better would benefit noise more.As neat of a party trick as it is to have the case fans stay off unless they’re needed in the loop, Origin should have kept at least one or two running at all times, like rear exhaust, to give the GPU some help. 
Besides, letting the hot air linger could potentially encourage local hot spots to form on subcomponents that aren’t directly monitored, which can lead to problems.Power At The WallNow we’ll look at full system load power consumption by logging it at the wall – so everything, even efficiency losses from the PSU, is taken into account.Idle, it pulled a relatively high 125W. At the 180 second mark, the CPU load kicks in. There’s a jump at 235 seconds when the GPU load kicks in.We see a slight ramp upwards in power consumption after that, which tracks with increasing leakage as the parts heat up, before settling in at an average of 884W at steady state. AcousticsNext we’ll cover dBA over time as measured in our hemi-anechoic chamber.At idle, the fans are off, which makes for a functionally silent system at the noise floor. The first fans to come on in the system are on the GPU, bringing noise levels up to a still-quiet range of 25-28dBA at 1 meter. The loudest point is 30.5 dBA when the GPU fans briefly ramp and before system fans kick in. CPU Frequency vs. Original ReviewFor CPU frequency, fortunately for Origin, it didn’t randomly throttle it by 1GHz this time. The 9800X3D managed to stay at 5225MHz during the CPU-only load portion of torture test – the same frequency that we recorded in our original review for the CPU so that’ good. At steady state with the GPU dumping over 500W of heat into the case, the average core frequency dropped by 50MHz. If Origin made better use of its dozen or so fans, it should hold onto more of that frequency. BIOS ConfigurationBIOS for the Origin pre-built is set up sensibly, at least. The build date is January 23, which was the latest available in the time between when we ordered the system at the 50 series launch and when the system was actually assembled.Scrutinizing the chosen settings revealed nothing out of line. The DDR5-6000 memory profile was enabled and the rest of the core settings were properly set to Auto. 
This was all fine.Setup and SoftwareThe Windows install was normal with no bloatware. That’s also good.The desktop had a few things on it. A “Link Windows 10 Key to Microsoft Account” PDF is helpful for people who don’t know what to do if their system shows the Activate Windows watermark. Confusingly, it hasn’t been updated to say “11” instead of “10.” It also shepherds the user towards using a Microsoft account. That’s not necessarily a bad thing, but we don’t like how it makes it seem necessary because it’s not and you shouldn’t. There’s also an “Origin PC ReadMe” PDF that doesn’t offer much except coverage for Origin’s ass with disclaimers and points of contact for support. One useful thing is that it points the user to “C:\\ORIGIN PC” to find “important items.”That folder has Origin branded gifs, logos, and wallpapers, as well as CPU-Z, Teamviewer, and a Results folder. Teamviewer is almost certainly for Origin’s support teams to be able to remotely inspect the PC during support calls. It makes sense to have that stuff on there. The results folder contains an OCCT test report that shows a total of 1 hour and 52 minutes of testing. A CPU test for 12 minutes, CPU + RAM, memory, and 3D adaptive tests for 30 minutes each, then finishing with 10 minutes of OCCT’s “power” test, which is a combined full system load. It’s great that Origin actually does testing and provides this log as a baseline for future issues, and just for base expectations. This is good and gives you something to work from. Not having OCCT pre-installed to actually run again for comparison is a support oversight. It’s free for personal use at least, so the user could go download it easily.There weren’t any missing drivers in Device Manager and NVIDIA’s 572.47 driver from February 20 was the latest at the time of the build – both good things. 
There wasn’t any bundled bloatware installed, so points to Origin for that.iCUE itself isn’t as bad as it used to be, but it’s still clunky, like the preloaded fan profiles not showing their set points. PackagingOn to packaging.The Origin Genesis pre-built came in a massive wooden crate that was big enough for two people to move around. Considering this PC was after taxes, we’re definitely OK with the wooden crate and its QR code opening instructions.Origin uses foam, a fabric cover, a cardboard box within a crate, and the crate for the PC. The case had two packs of expanding foam inside it, allowing the GPU to arrive undamaged and installed. The sticker on the side panel also had clear instructions. These are good things. Unfortunately, there’s a small chip in the paint on top of the case, but not as bad as the last Origin paint issues we had and we think it’s unrelated to the packaging itself.AccessoriesThe accessory kit is basic, and came inside of a box with the overused cringey adage “EAT SLEEP GAME REPEAT” printed on it. Inside are the spare PSU cables, an AC power cable, stock 5090 FE power adapter, standard motherboard and case accessories, a G1/4 plug tool and extra plugs, and a piece of soft tubing with a fitting on one end that can be used to help drain the cooling loop. All of this is good.Conclusion Visit our Patreon page to contribute a few dollars toward this website's operationAdditionally, when you purchase through links to retailers on our site, we may earn a small affiliate commission.During this review process, the price went even higher. You already shouldn’t buy this, but just to drive it home:Now, for the same configuration, the Genesis now costs after the discount, off the new sticker price of That’s an increase of over making the premium over current DIY pricing roughly -Now, there are good reasons for the price to go up. 
Tariffs have a real impact on pricing and we’re going to see it everywhere, and tariffs are also outside of Corsair’s control. We don’t fault them for that. But that doesn’t change the fact that the cost over DIY is so insanely elevated. Even Corsair’s own competitors offer better value than this, like Maingear.At sticker price, you’d have to be drunk on whatever is discoloring Origin’s loop to buy it. Nobody should buy this, especially not for gaming. If you’re doing productivity or creative work that would seriously benefit from the 5090’s 32GB of VRAM, then look elsewhere for a better deal. This costs nearly as much as an RTX Pro 6000, which has 96GB of VRAM and is better.It would actually be cheaper to get scalped for a 5090 on Ebay and then buy the whole rest of the computer than to buy this Origin system. That’s how crazy this is.The upcharge, even assuming a 5090 price of is just way too high versus other system integrators. Seriously, Alienware is cheaper at this point – by thousands of dollars. Alienware.We can’t recommend this PC. Ignoring the price, the memory on the video card is hitting 100 degrees C in workloads when the fans aren’t turning on because the fans are set to turn on based on the liquid temperature and the liquid doesn’t touch the GPU. For that reason alone, it gets a failing grade. For our thermal testing, pre-builts have to pass the torture test. If they don’t, they instantly fail. That’s how it always works for our pre-built reviews. This system has, unfortunately, instantly failed.
    #disaster #prebuilt #corsair #ampamp #origin
    $8000* Disaster Prebuilt PC - Corsair & Origin Fail Again
May 19, 2025 | Last Updated: 2025-05-19

We test Origin's expensive PC's thermals, acoustics, power, and frequency, and perform a tear-down.

The Highlights

- Our Origin Genesis PC comes with an RTX 5090, a 9800X3D, and 32GB of system memory
- Due to poor system thermals, the memory on the GPU fails our testing
- The fans in the system don't ramp up until the liquid-cooled CPU gets warm, which means the air-cooled GPU temperature suffers

Original MSRP: $6,050+
Release Date: January 2025

Our fully custom 3D Emblem Glasses celebrate our 15th Anniversary! We hand-assemble these on the East Coast in the US with a metal badge, strong adhesive, and high-quality pint glass. They pair excellently with our 3D 'Debug' Drink Coasters. Purchases keep us ad-free and directly support our consumer-focused reviews!

Intro

We paid $6,050 for Origin PC's 5090-powered Genesis when it launched, or $6,500 after taxes. Today, a similar build has a list price of $8,396. Markup is $1,700 to $2,500 over DIY. This computer costs as much as an RTX Pro 6000, or a used car, or a brand new Kia Rio with a lifetime warranty in 2008 with passenger doors that fall off… The point is, this is expensive, and it also sucks.

Editor's note: This was originally published on May 16, 2025 as a video. This content has been adapted to written format for this article and is unchanged from the original publication.

Credits

Test Lead, Host, Writing: Steve Burke
Video Editing, Camera: Mike Gaglione
Testing, Writing: Jeremy Clayton
Camera: Tim Phetdara
Writing, Web Editing: Jimmy Thang

The RTX 5090 is the most valuable thing in this build for its 32GB of VRAM, and to show you how much they care about the only reason you'd buy this prebuilt, Origin incinerates the memory at 100 degrees Celsius by choosing not to spin the fans for 8 minutes while under load.
The so-called "premium" water cooling includes tubes made out of discolored McDonald's toy plastic that was left in the sun too long, making it look old, degraded, and dirty. But there are some upsides for this expensive computer. For example, it's quiet, to its credit – mostly because the fans don't spin… for 8 minutes.

Overview

Originally, this Origin Genesis pre-built cost $6,488 – and that's after taxes and a $672 discount off the initial sticker price of $6,722. We ordered it immediately after the RTX 5090 launch, which turned out to be one of the only reliable ways to actually get a 5090 with supply as bad as it was (and continues to be). It took a while to come in, but it did arrive in the usual Origin crate.

We reviewed one of these a couple years ago that was a total disaster of a combo. The system had a severely underclocked CPU, ridiculously aggressive fan behavior (the opposite of the system we're reviewing today), chipped paint, and a nearly unserviceable hardline custom liquid cooling loop. Hopefully this one has improved. And hopefully it isn't 1GHz below spec.

Parts and Price

Origin PC RTX 5090 + 9800X3D "Genesis" part prices | GamersNexus (retail, as of April 2025):

- Motherboard: MSI PRO B650-P WIFI
- CPU: AMD Ryzen 7 9800X3D
- Graphics Card: NVIDIA RTX 5090 Founders Edition
- RAM: Corsair Vengeance DDR5-6000
- SSD: Corsair MP600 CORE XT 1TB PCIe 4 M.2
- Custom Loop: "Hydro X iCUE LINK Cooling" (pump, radiator, block, fittings)
- Fans: 12x Corsair iCUE LINK RX120 120mm
- Case: Corsair 7000D Airflow
- PSU: Corsair RM1200x SHIFT 80+ Gold
- RGB/Fan Controller: 2x Corsair iCUE LINK System Hub (N/A)
- Operating System: Windows 11 (N/A)
- T-Shirt: ORIGIN PC T-Shirt (N/A)
- Mousepad: ORIGIN PC Mouse Pad (N/A)
- Shipping: "ORIGIN Maximum Protection Shipping Process: ORIGIN Wooden Crate Armor" (N/A)
- ???: "The ORIGIN Difference: Unrivaled Quality & Performance" (Priceless)

We'll price it out based on the original, pre-tariff build before taxes and with a 10% off promo.
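The promo math above is simple to sketch. In this hypothetical helper, the $6,722 sticker and the 10% promo come from this review, but the DIY subtotal is a placeholder value, not the real (elided) figure:

```python
def markup_vs_diy(sticker_price, promo_pct, diy_subtotal):
    """Promo price after a percentage discount, plus the markup over the
    DIY parts subtotal in dollars and as a percentage of DIY."""
    promo_price = sticker_price * (1 - promo_pct / 100)
    markup = promo_price - diy_subtotal
    return promo_price, markup, 100 * markup / diy_subtotal

# Sticker and promo are from this review; diy_subtotal is a placeholder.
price, markup, pct = markup_vs_diy(sticker_price=6722, promo_pct=10,
                                   diy_subtotal=4800)
print(f"${price:,.0f} promo price, ${markup:,.0f} over DIY ({pct:.0f}%)")
```

With the real sticker and promo, the computed promo price lands at the $6,050 we actually paid; swap in the true retail subtotal to get the real markup.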
Keep in mind that the new price is to depending on when you buy.

The good news is that nothing is proprietary – all of its parts are standard. The bad news is that this means we can directly compare it to retail parts, which, at the time we wrote this piece, would cost far less, making for a large markup compared to the pre-tax subtotal. That's a huge amount to pay for someone to screw the parts together. Given the price of the system, the MSI PRO B650-P WIFI motherboard and 1TB SSD are stingy, and the 7000D Airflow case is old at this point. The parts don't match the price.

Just two months after we ordered and around when it finally arrived, Origin now offers a totally different case and board with the Gigabyte X870E Aorus Elite. The base SSD is still just 1TB, though – only good enough for roughly two or three full Call of Duty installs.

The detailed packing sheet lists 22 various water cooling fittings, but, curiously, the build itself only has 15, plus one more in the accessory kit, making it 16 by our count. We don't know how Origin got 22 here, but it isn't 22. Hopefully we weren't charged for 22. Oh, and it apparently comes with "1 Integrated High-Definition." Good. That's good. We wouldn't want 0 integrated high definitions.

Similar to last time, you also get "The ORIGIN Difference: Unrivaled Quality & Performance" as a line item. Putting intangible, unachievable promises on the literal receipt is the Origin way: Origin's quality is certainly rivaled.

Against DIY, pricing is extreme and insane as an absolute dollar amount when the other SIs are at a lower markup at the high end. In order for this system to be "worth" more than DIY, it would need to be immaculate, and it's not. The only real value the PC offers is the 5090. Finding a 5090 Founders Edition now at its original price is an increasingly unlikely scenario.
Lately, price increases with scarcity and tariffs have pushed 5090s higher, so the markup shrinks somewhat if we assume that inflated 5090 price instead. That's still a big markup, and the motherboard is still disappointing, the tubes are still discolored, the SSD is too small, and it still has problems with the fans not properly spinning – but it's less insane.

Build Quality

Getting into the parts choices: this new Genesis has a loop that's technically set up better than the last one, but it only cools the CPU. That means we have a computer with water cooling, but only on the coolest of the two silicon parts – the one that pulls under 150W. That leaves the 575W RTX 5090 FE to fend for itself, and that doesn't always go well.

Originally, Origin didn't have the option to water cool the 5090. It's just a shame that Origin isn't owned by a gigantic PC hardware company that manufactures its own water cooling components and even has its own factories and is publicly traded and transacts billions of dollars a year to the point that it might have had enough access to make a block... A damn shame. Maybe we'll buy from a bigger company next time.

At least now, under the new sticker price, you can spend more and add a water block to the GPU. Problem solved – turns out, we just needed to spend even more money.

Here's a closer look at Origin's "premium" cooling solution, complete with saggy routing that looks deflated and discolored tubing that has that well-hydrated catheter tube coloring to it. The fluid is clean and the contents of the block are fine, but the tubing is the problem. In fact, the included drain tube is the correct coloring, making it even more obvious how discolored the loop is.

Corsair says its XT Softline tubing is "UV-resistant tubing made to withstand the test of time without any discoloration or deforming." So clearly something is wrong. Or not "clearly," actually, seeing as it's not clear. The tubing looks gross. It shouldn't look gross.
The spare piece in the accessory kit doesn't look gross. The coolant is even Corsair's own XL8 clear fluid, making it even more inexcusable. We're not the only ones to have this problem, though – we found several posts online with the same issue and very little in the way of an official response from Corsair or Origin. We only saw one reply asking the user to contact support.

Even without the discoloration, it comes off as looking amateurish from the way it just hangs around the inside of the case. There's not a lot you can do about long runs of flexible tubing, unless maybe you're the one building it and have complete control of everything in the pipeline...

There is one thing we can compliment about the loop: Origin actually added a ball valve at the bottom underneath the pump for draining and maintenance, which is something we directly complained about on the previous Origin pre-built. We're glad to see that get addressed.

The fans in the build are part of Corsair's relatively new LINK family, so they're all daisy-chained together with a single USB-C-esque cable and controlled in tandem by two of Corsair's hubs. It's an interesting system that extends to include the pump and CPU block – both of which have liquid temperature sensors.

Tear-down

Grab a GN15 Large Anti-Static Modmat to celebrate our 15th Anniversary and for a high-quality PC building work surface. The Modmat features useful PC building diagrams and is anti-static conductive. Purchases directly fund our work!

We're starting the tear-down by looking at the cable management side. Opening up the swinging side panel, we noticed masking tape on the dust filter, which we're actually okay with, as it's there to keep the filter in place during shipping and is removable. Internally, they've included all of the unused PSU cables in the system's accessories box, which we'll talk more about down below. The cable routing makes sense and is generally well managed.
While they tied the cables together, not all of the ties were tied down to the chassis. The system uses the cable management channel for the 24-pin connector. Overall, it's clean and they've done well here.

Looking at the other side of the system, we can see that the power cable leading into the 5090 is mostly seated and isn't a concern to us. Removing the water block's cable, we found a little piece of plastic that acted as a pull tab. That's actually kind of nice. Removing the screws on the water block reveals that they are captive, which is also nice. Looking at the pattern, we can see that they used pre-applied paste via a silk screen. That allowed contact for all 8 legs of the IHS, which looked good with overall even pressure. The block application was also good.

Checking how well all of the cables were seated, everything was fine from the CPU fan header down to the front panel connectors. Removing the heatsink from the NVMe SSD, we didn't see any plastic left on the thermal pad, which is good. Looking at the 16GB DDR5-6000 RAM modules, they are in the correct slots, and Origin outfitted them with Corsair 36-44-44-96 sticks, which are not the greatest timings. Examining the tightness of all the screws on the motherboard, we didn't encounter any loose ones. Removing the motherboard from the case, everything looked fine. Out of the case, it's a lower-end board than we'd like to see in a premium system.

The fans are immaculately installed, which is partially due to how they're connected together. This results in a very clean setup. The back side of the PC has a massive radiator. Overall, the system has very clean cable management and the assembly was mostly good. That leaves the system's biggest issues as its value and its water-cooling setup. We didn't drain the loop, so we're going to keep running it and see what it looks like down the road.
Thermal Benchmarks

System Thermals at Steady State

Getting into the benchmarking, we'll start with thermals. Right away, the 96-degree result on the memory junction is a problem – especially because this is an average, which means we have periodic spikes to 100 degrees. The technical rating on this memory is 105 degrees for maximum safety spec. This is getting way too close and is hotter than what we saw in our 5090 FE review. This is also while all of the thermal pads are brand new. The Origin pre-built uses a large case with 12 fans, so it should be impossible for the GPU to be this hot.

The Ryzen 7 9800X3D hit 87C at steady state – also not great for how much cooling is in this box. All of the various motherboard and general system temperature sensors fell well within acceptable ranges.

Finally, the watercooling parts provide a couple of liquid temperatures. The pump is on the "cool" side of the loop and read 36.7C at steady state, while the coolant in the block on the "hot" side of the loop got up to 41.3C. You typically want liquid temperature to stay under 55C to not violate spec on the pump and tubing, so this is fine. We need to plot these over time to uncover some very strange behavior.

CPU Temperature vs. Fan Speeds Over Time

CPU temperature during the test starts out on a slow ramp upwards during the idle period. When the CPU load first starts, we see an immediate jump to about 72C, a brief drop, then a long and steady rise from roughly 250 seconds to 750 seconds into the test, where it levels off at the 87C mark. The VRM temperature follows the same general curve, but takes longer to reach steady state. Adding the liquid temperatures to the chart shows the same breakpoints. Finally, adding pump and fan speeds gives us the big reveal for why the curves look like this: the pump stair-steps up in speed while the temperatures rise, but the fans don't even turn on until over 8 minutes into the load's runtime.
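Spotting that delay in a monitoring log is straightforward. A minimal sketch, with a hypothetical helper and made-up samples shaped like this run's behavior (load at 180 seconds, fans waking over 8 minutes later):

```python
def fan_delay_seconds(samples, load_start_s):
    """samples: (time_s, case_fan_rpm) pairs from a monitoring log.
    Returns seconds from load start until the case fans first report a
    nonzero RPM, or None if they never spin."""
    for t, rpm in samples:
        if t >= load_start_s and rpm > 0:
            return t - load_start_s
    return None

# Hypothetical log: CPU load starts at t=180s; fans first spin at t=680s.
log = [(0, 0), (180, 0), (400, 0), (680, 530), (900, 530)]
print(fan_delay_seconds(log, 180))  # 500 seconds -- over 8 minutes
```

The same pass over real logged data is how a delay like this shows up at a glance instead of hiding inside a chart.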
Once they're actually running, they average out to just 530RPM, which is so slow that they might as well be off. This is an awful configuration. Responding to liquid temperature isn't new, but this is done without any thought whatsoever. If you tie all fans to liquid temperature, and you have parts not cooled by liquid – like the VRAM on the video card – then you're going to have a bad time. And that's the next chart. But first: this is an overcorrection from how Origin handled the last custom loop PC we reviewed from the company, which immediately ramped the fans as high as it could as soon as the CPU started doing anything. Maybe now they can find a middle ground, since we've found the two extremes of thoughtless cooling.

GPU Temperature vs. Fan Speeds Over Time

This chart shows GPU temperatures versus GPU fan speed. The GPU temperature under load rises to around 83C before coming back down when the case fans finally kick on. As a reminder, 83-84 degrees is when NVIDIA starts hard-throttling the clocks beyond normal GPU Boost behavior, so they're dropping clocks as a result of this configuration. The 5090's VRAM already runs hot on an open bench – 89 to 90 degrees Celsius – and that gets pushed up to peak at 100C in the Origin pre-built. This is unacceptable.

Adding the GPU fan speed to the chart shows how the Founders Edition cooler attempts to compensate by temporarily boosting fan speed to 56% during this time, which also means Origin isn't benefiting as much from the slower case fans' noise levels as it should. Balancing them better would benefit noise more. As neat of a party trick as it is to have the case fans stay off unless the loop needs them, Origin should have kept at least one or two running at all times, like rear exhaust, to give the GPU some help.
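The failure mode described above can be sketched in a few lines. This is not Origin's or Corsair's actual control logic – all thresholds are hypothetical – but it shows why a liquid-only curve leaves air-cooled parts stranded, and how taking the maximum demand across sensors fixes it:

```python
def fan_duty_liquid_only(liquid_c, on_threshold_c=38.0, max_c=55.0):
    """Naive policy: case fans respond to coolant temperature alone.
    Below the threshold the fans stay off, no matter what else is hot."""
    if liquid_c < on_threshold_c:
        return 0.0
    return min(1.0, (liquid_c - on_threshold_c) / (max_c - on_threshold_c))

def fan_duty_multi_source(liquid_c, gpu_vram_c, vram_limit_c=100.0):
    """Better policy: take the highest demand across sensors, so parts
    the liquid never touches (e.g. GPU VRAM) still get airflow."""
    liquid_demand = fan_duty_liquid_only(liquid_c)
    # VRAM demand ramps from 70C up toward its limit (hypothetical curve).
    vram_demand = min(1.0, max(0.0, (gpu_vram_c - 70.0) / (vram_limit_c - 70.0)))
    return max(liquid_demand, vram_demand)

# GPU-heavy load: coolant still cool (CPU barely loaded), VRAM already hot.
print(round(fan_duty_liquid_only(36.7), 2))          # 0.0 -- fans stay off
print(round(fan_duty_multi_source(36.7, 96.0), 2))   # 0.87 -- fans spinning
```

With the coolant at this system's measured 36.7C and the VRAM at 96C, the liquid-only policy keeps every case fan off while the multi-source policy would have them near full tilt.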
Besides, letting the hot air linger could encourage local hot spots to form on subcomponents that aren't directly monitored, which can lead to problems.

Power At The Wall

Now we'll look at full system load power consumption by logging it at the wall – so everything, even efficiency losses from the PSU, is taken into account. Idle, it pulled a relatively high 125W. At the 180-second mark, the CPU load kicks in. There's a jump at 235 seconds when the GPU load kicks in. We see a slight ramp upwards in power consumption after that, which tracks with increasing leakage as the parts heat up, before settling in at an average of 884W at steady state.

Acoustics

Next, we'll cover dBA over time as measured in our hemi-anechoic chamber. At idle, the fans are off, which makes for a functionally silent system at the noise floor. The first fans to come on in the system are on the GPU, bringing noise levels up to a still-quiet range of 25-28dBA at 1 meter. The loudest point is 30.5dBA, when the GPU fans briefly ramp before the system fans kick in.

CPU Frequency vs. Original Review

For CPU frequency, fortunately for Origin, it didn't randomly throttle by 1GHz this time. The 9800X3D managed to stay at 5225MHz during the CPU-only load portion of the torture test – the same frequency we recorded in our original review for the CPU, so that's good. At steady state, with the GPU dumping over 500W of heat into the case, the average core frequency dropped by 50MHz. If Origin made better use of its dozen or so fans, it should hold onto more of that frequency.

BIOS Configuration

BIOS for the Origin pre-built is set up sensibly, at least. The build date is January 23, which was the latest available in the window between when we ordered the system at the 50-series launch and when it was actually assembled. Scrutinizing the chosen settings revealed nothing out of line. The DDR5-6000 memory profile was enabled and the rest of the core settings were properly set to Auto.
This was all fine.

Setup and Software

The Windows install was normal with no bloatware. That's also good. The desktop had a few things on it. A "Link Windows 10 Key to Microsoft Account" PDF is helpful for people who don't know what to do if their system shows the Activate Windows watermark. Confusingly, it hasn't been updated to say "11" instead of "10." It also shepherds the user towards using a Microsoft account. That's not necessarily a bad thing, but we don't like how it makes a Microsoft account seem necessary, because it's not and you shouldn't feel obligated.

There's also an "Origin PC ReadMe" PDF that doesn't offer much except coverage for Origin's ass with disclaimers and points of contact for support. One useful thing is that it points the user to "C:\\ORIGIN PC" to find "important items." That folder has Origin-branded gifs, logos, and wallpapers, as well as CPU-Z, TeamViewer, and a Results folder. TeamViewer is almost certainly there so Origin's support teams can remotely inspect the PC during support calls. It makes sense to have that stuff on there.

The Results folder contains an OCCT test report that shows a total of 1 hour and 52 minutes of testing: a CPU test for 12 minutes; CPU + RAM, memory, and 3D adaptive tests for 30 minutes each; then 10 minutes of OCCT's "power" test, which is a combined full system load. It's great that Origin actually does testing and provides this log as a baseline for future issues and for base expectations. This is good and gives you something to work from. Not having OCCT pre-installed to actually run again for comparison is a support oversight. It's free for personal use, at least, so the user could easily download it.

There weren't any missing drivers in Device Manager, and NVIDIA's 572.47 driver from February 20 was the latest at the time of the build – both good things.
There wasn't any bundled bloatware installed, so points to Origin for that. iCUE itself isn't as bad as it used to be, but it's still clunky – for example, the preloaded fan profiles don't show their set points.

Packaging

On to packaging. The Origin Genesis pre-built came in a massive wooden crate that was big enough for two people to move around. Considering what this PC cost after taxes, we're definitely OK with the wooden crate and its QR code opening instructions. Origin uses foam, a fabric cover, a cardboard box within a crate, and the crate itself for the PC. The case had two packs of expanding foam inside it, allowing the GPU to arrive undamaged and installed. The sticker on the side panel also had clear instructions. These are good things. Unfortunately, there's a small chip in the paint on top of the case, but it's not as bad as the last Origin paint issues we had, and we think it's unrelated to the packaging itself.

Accessories

The accessory kit is basic and came inside of a box with the overused, cringey adage "EAT SLEEP GAME REPEAT" printed on it. Inside are the spare PSU cables, an AC power cable, the stock 5090 FE power adapter, standard motherboard and case accessories, a G1/4 plug tool and extra plugs, and a piece of soft tubing with a fitting on one end that can be used to help drain the cooling loop. All of this is good.

Conclusion

Visit our Patreon page to contribute a few dollars toward this website's operation. Additionally, when you purchase through links to retailers on our site, we may earn a small affiliate commission.

During this review process, the price went even higher. You already shouldn't buy this, but just to drive it home: for the same configuration, the Genesis now costs more after the discount off the new, higher sticker price, making the premium over current DIY pricing even larger. Now, there are good reasons for the price to go up.
Tariffs have a real impact on pricing and we're going to see it everywhere; tariffs are also outside of Corsair's control. We don't fault them for that. But that doesn't change the fact that the cost over DIY is so insanely elevated. Even Corsair's own competitors offer better value than this, like Maingear.

At sticker price, you'd have to be drunk on whatever is discoloring Origin's loop to buy it. Nobody should buy this, especially not for gaming. If you're doing productivity or creative work that would seriously benefit from the 5090's 32GB of VRAM, then look elsewhere for a better deal. This costs nearly as much as an RTX Pro 6000, which has 96GB of VRAM and is better. It would actually be cheaper to get scalped for a 5090 on eBay and then buy the whole rest of the computer than to buy this Origin system. That's how crazy this is. The upcharge is just way too high versus other system integrators. Seriously, Alienware is cheaper at this point – by thousands of dollars. Alienware.

We can't recommend this PC. Ignoring the price, the memory on the video card is hitting 100 degrees C in workloads because the fans aren't turning on: the fans are set to respond to liquid temperature, and the liquid doesn't touch the GPU. For that reason alone, it gets a failing grade. For our thermal testing, pre-builts have to pass the torture test. If they don't, they instantly fail. That's how it always works for our pre-built reviews. This system has, unfortunately, instantly failed.
    $8000* Disaster Prebuilt PC - Corsair & Origin Fail Again
    gamersnexus.net
May 19, 2025 | Last Updated: 2025-05-19

We test Origin’s expensive PC’s thermals, acoustics, power, and frequency, and perform a tear-down.

The Highlights
- Our Origin Genesis PC comes with an RTX 5090, a 9800X3D, and 32GB of system memory
- Due to poor system thermals, the memory on the GPU fails our testing
- The fans in the system don’t ramp up until the liquid-cooled CPU gets warm, which means the air-cooled GPU temperature suffers

Original MSRP: $6,050+
Release Date: January 2025

Intro

We paid $6,050 for Origin PC’s 5090-powered Genesis when it launched, or $6,500 after taxes. Today, a similar build has a list price of $8,396. Markup is $1,700 to $2,500 over DIY. This computer costs as much as an RTX Pro 6000, or a used car, or a brand new Kia Rio with a lifetime warranty in 2008 with passenger doors that fall off… The point is, this is expensive, and it also sucks.

Editor's note: This was originally published on May 16, 2025 as a video. This content has been adapted to written format for this article and is unchanged from the original publication.

Credits
- Test Lead, Host, Writing: Steve Burke
- Video Editing, Camera: Mike Gaglione
- Testing, Writing: Jeremy Clayton
- Camera: Tim Phetdara
- Writing, Web Editing: Jimmy Thang

The RTX 5090 is the most valuable thing in this build for its 32GB of VRAM, and to show you how much they care about the only reason you’d buy this prebuilt, Origin incinerates the memory at 100 degrees Celsius by choosing not to spin the fans for 8 minutes while under load.
The so-called “premium” water cooling includes tubes made out of discolored McDonald’s toy plastic that was left in the sun too long, making it look old, degraded, and dirty. But there are some upsides for this expensive computer. For example, it’s quiet, to its credit – mostly because the fans don’t spin…for 8 minutes.

Overview

Originally, this Origin Genesis pre-built cost $6,488 – and that’s after taxes and a $672 discount off the initial sticker price of $6,722. We ordered it immediately after the RTX 5090 launch, which turned out to be one of the only reliable ways to actually get a 5090 with supply as bad as it was (and continues to be). It took a while to come in, but it did arrive in the usual Origin crate. We reviewed one of these a couple years ago, and it was a total disaster of a combo. That system had a severely underclocked CPU, ridiculously aggressive fan behavior (the opposite of the system we’re reviewing today), chipped paint, and a nearly unserviceable hardline custom liquid cooling loop. Hopefully this one has improved.
And hopefully isn’t 1GHz below spec.

Parts and Price

Origin PC RTX 5090 + 9800X3D "Genesis" Part Prices | GamersNexus

Part | Name | Retail Price (4/25)
Motherboard | MSI PRO B650-P WIFI | $190
CPU | Ryzen 7 9800X3D | $480
Graphics Card | NVIDIA RTX 5090 Founders Edition | $2,000
RAM | Corsair Vengeance DDR5-6000 (2x16GB) | $93
SSD 1 | Corsair MP600 CORE XT 1TB PCIe 4 M.2 SSD | $70
Custom Loop | "Hydro X iCUE LINK Cooling" (pump, radiator, block, fittings) | $712
Fans | 12x Corsair iCUE LINK RX120 120mm Fan | $360
Case | Corsair 7000D Airflow | $240
PSU | Corsair RM1200x SHIFT 80+ Gold | $230
RGB/Fan Controller | 2x Corsair iCUE Link System Hub | $118
Operating System | Windows 11 | N/A
T-Shirt | ORIGIN PC T-Shirt | N/A
Mousepad | ORIGIN PC Mouse Pad | N/A
Shipping | "ORIGIN Maximum Protection Shipping Process: ORIGIN Wooden Crate Armor" | N/A
??? | "The ORIGIN Difference: Unrivaled Quality & Performance" | Priceless
Total retail cost of all parts as of April 2025 | | $4,493

We’ll price it out based on the original, pre-tariff $6,050 build before taxes and with a 10% off promo. Keep in mind that the new price is $7,500 to $8,400, depending on when you buy. The good news is that nothing is proprietary – all of its parts are standard. The bad news is that this means we can directly compare it to retail parts, which, at the time we wrote this piece, would cost $4,493, making for a $1,557 markup compared to the pre-tax subtotal. That’s a huge amount to pay for someone to screw the parts together. Given the price of the system, the MSI PRO B650-P WIFI motherboard and 1TB SSD are stingy, and the 7000D Airflow case is old at this point. The parts don’t match the price. Just two months after we ordered – around when the system finally arrived – Origin now offers a totally different case and board with the Gigabyte X870E Aorus Elite. The base SSD is still just 1TB, though – only good enough for roughly two or three full Call of Duty installs.
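The markup arithmetic is simple enough to check yourself. A quick sketch (part names and prices copied from the table above; $6,050 is the pre-tax, post-promo price as ordered):

```python
# Retail part prices from the table above (April 2025, USD).
parts = {
    "MSI PRO B650-P WIFI motherboard": 190,
    "Ryzen 7 9800X3D": 480,
    "NVIDIA RTX 5090 Founders Edition": 2000,
    "Corsair Vengeance DDR5-6000 (2x16GB)": 93,
    "Corsair MP600 CORE XT 1TB SSD": 70,
    "Hydro X custom loop (pump, rad, block, fittings)": 712,
    "12x Corsair iCUE LINK RX120 fans": 360,
    "Corsair 7000D Airflow case": 240,
    "Corsair RM1200x SHIFT PSU": 230,
    "2x Corsair iCUE Link System Hub": 118,
}

system_price = 6050  # pre-tax, post-promo price as ordered

diy_total = sum(parts.values())
markup = system_price - diy_total

print(f"DIY total: ${diy_total:,}")  # DIY total: $4,493
print(f"Markup:    ${markup:,}")     # Markup:    $1,557
```

The same arithmetic with a $2,800 street-price 5090 instead of the $2,000 FE MSRP is where the review’s lower $777-markup scenario comes from.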
The detailed packing sheet lists 22 various water cooling fittings, but, curiously, the build itself only has 15, plus one more in the accessory kit, making it 16 by our count. We don’t know how Origin got to 22 here, but it isn’t 22. Hopefully we weren’t charged for 22. Oh, and it apparently comes with “1 Integrated High-Definition.” Good. That’s good. We wouldn’t want 0 integrated high definitions. Similar to last time, you also get “The ORIGIN Difference: Unrivaled Quality & Performance” as a line item. Putting intangible, unachievable promises on the literal receipt is the Origin way: Origin’s quality is certainly rivaled. Against DIY, the pricing is extreme and insane as an absolute dollar amount when the other SIs are around $500-$800 markup at the high end. In order for this system to be “worth” $1,500 more than DIY, it would need to be immaculate, and it’s not. The only real value the PC offers is the 5090. Finding a 5090 Founders Edition for $2,000 is an increasingly unlikely scenario. Lately, price increases from scarcity and tariffs have pushed 5090s closer to $2,800 or more; with that assumption, the markup would instead be $777. That’s still a big markup, and the motherboard is still disappointing, the tubes are still discolored, the SSD is too small, and it still has problems with the fans not properly spinning – but it’s less insane.

Build Quality

Getting into the parts choices: this new Genesis has a loop that’s technically set up better than the last one, but it only cools the CPU. That means we have a $6,500 computer with water cooling, but only on the cooler of the two silicon parts – the one that pulls under 150W. That leaves the 575W RTX 5090 FE to fend for itself, and that doesn’t always go well. Originally, Origin didn’t have the option to water cool the 5090.
It’s just a shame that Origin isn’t owned by a gigantic PC hardware company that manufactures its own water cooling components and even has its own factories and is publicly traded and transacts billions of dollars a year to the point that it might have had enough access to make a block... A damn shame. Maybe we’ll buy from a bigger company next time.At least now, with the new sticker price of $8,400, you can spend another $200 and add a water block to the GPU. Problem solved -- turns out, we just needed to spend even more money. Here’s a closer look at Origin’s “premium” cooling solution, complete with saggy routing that looks deflated and discolored tubing that has that well-hydrated catheter tube coloring to it.The fluid is clean and the contents of the block are fine, but the tubing is the problem. In fact, the included drain tube is the correct coloring, making it even more obvious how discolored the loop is.Corsair says its XT Softline tubing is “UV-resistant tubing made to withstand the test of time without any discoloration or deforming.”So clearly something is wrong. Or not “clearly,” actually, seeing as it’s not clear. The tubing looks gross. It shouldn’t look gross. The spare piece in the accessory kit doesn’t look gross. The coolant is even Corsair’s own XL8 clear fluid, making it even more inexcusable.We’re not the only ones to have this problem, though – we found several posts online with the same issue and very little in the way of an official response from Corsair or Origin. We only saw one reply asking the user to contact support.Even without the discoloration, it comes off as looking amateurish from the way it just hangs around the inside of the case. There’s not a lot you can do about long runs of flexible tubing, unless maybe you’re the one building it and have complete control of everything in the pipeline... 
There is one thing we can compliment about the loop: Origin actually added a ball valve at the bottom underneath the pump for draining and maintenance, which is something we directly complained about on the previous Origin pre-built. We’re glad to see that get addressed. The fans in the build are part of Corsair’s relatively new LINK family, so they’re all daisy-chained together with a single USB-C-esque cable and controlled in tandem by two of Corsair’s hubs. It’s an interesting (if expensive) system that extends to include the pump and CPU block – both of which have liquid temperature sensors.

Tear-down

We’re starting the tear-down by looking at the cable management side. Opening up the swinging side panel, we noticed masking tape on the dust filter, which we’re actually okay with, as it’s there to keep the filter in place during shipping and is removable. Internally, they’ve included all of the unused PSU cables in the system’s accessories box, which we’ll talk more about down below. The cable routing makes sense and is generally well managed. While they tied the cables together, not all of the ties were tied down to the chassis. The system uses the cable management channel for the 24-pin connector. Overall, it’s clean and they’ve done well here. Looking at the other side of the system, we can see that the power cable leading into the 5090 is mostly seated, and isn’t a concern to us. Removing the water block’s cable, it had a little piece of plastic which acted as a pull tab. That’s actually kind of nice. Removing the screws on the water block reveals that they’re captive, which is also nice. Looking at the pattern, we can see that they used pre-applied paste via a silk screen.
That allowed contact for all 8 legs of the IHS, which looked good with overall even pressure. The block application was also good. Checking how well all of the cables were seated, everything was fine from the CPU fan header down to the front panel connectors. Removing the heatsink from the NVMe SSD, we didn’t see any plastic left on the thermal pad, which is good. Looking at the 16GB DDR5-6000 RAM modules, they’re in the correct slots, and Origin outfitted the system with Corsair 36-44-44-96 sticks, which are not the greatest timings. Examining the tightness of all the screws on the motherboard, we didn’t encounter any loose ones. Removing the motherboard from the case, everything looked fine. Out of the case, it’s a lower-end board than we’d like to see in a premium system. The fans are immaculately installed, which is partially due to how they’re connected together. This results in a very clean setup. The back side of the PC has a massive radiator. Overall, the system has very clean cable management and the assembly was mostly good. That leaves the system’s biggest issues as the value and the water-cooling setup. We didn’t drain the loop, so we’re going to keep running it and see what it looks like down the road.

Thermal Benchmarks

System Thermals at Steady State

Getting into the benchmarking, we’ll start with thermals. Right away, the 96-degree result on the memory junction is a problem – especially because this is an average, which means we have periodic spikes to 100 degrees. The technical rating on this memory is 105 degrees for maximum safety spec. This is getting way too close and is hotter than what we saw in our 5090 FE review. This is also while all of the thermal pads are brand new. The Origin pre-built uses a large case with 12 fans, so it should be impossible for the GPU to be this hot. The Ryzen 9800X3D hit 87C at steady-state – which is also not great for how much cooling is in this box.
All of the various motherboard and general system temperature sensors fell well within acceptable ranges. Finally, the water-cooling parts provide a couple of liquid temperatures. The pump is on the “cool” side of the loop and read 36.7C at steady state, while the coolant in the block on the “hot” side of the loop got up to 41.3C. You typically want liquid temperature to stay under 55C (at most) to not violate spec on the pump and tubing, so this is fine. We need to plot these over time to uncover some very strange behavior.

CPU Temperature vs. Fan Speeds Over Time

CPU temperature during the test starts out on a slow ramp upwards during the idle period. When the CPU load first starts, we see an immediate jump to about 72C, a brief drop, then a long and steady rise from roughly 250 seconds to 750 seconds into the test, where it levels off at the 87C mark. The VRM temperature follows the same general curve, but takes longer to reach steady-state. Adding the liquid temperatures to the chart shows the same breakpoints. Finally, adding pump and fan speeds gives us the big reveal for why the curves look like this. The pump stair-steps up in speed while the temperatures rise, but the fans don’t even turn on until over 8 minutes into the load’s runtime. Once they’re actually running, they average out to just 530RPM, which is so slow that they might as well be off. This is an awful configuration. Responding to liquid temperature isn’t new, but this is done without any thought whatsoever. If you tie all fans to liquid temperature, and you have parts not cooled by liquid – like the VRAM on the video card – then you’re going to have a bad time. And that’s the next chart. But before that one: this is an overcorrection from how Origin handled the last custom loop PC we reviewed from the company, which immediately ramped the fans up as high as it could as soon as the CPU started doing anything.
Maybe now they can find a middle ground, since we’ve found the two extremes of thoughtless cooling.

GPU Temperature vs. Fan Speeds Over Time

This chart shows GPU temperatures versus GPU fan speed. The GPU temperature under load rises to around 83C before coming back down when the case fans finally kick on. As a reminder, 83-84 degrees is when NVIDIA starts hard-throttling the clocks beyond just GPU Boost, so clocks are dropping as a result of this configuration. The 5090’s VRAM already runs hot on an open bench – 89 to 90 degrees Celsius – and that gets pushed up to a peak of 100C in the Origin pre-built. This is unacceptable. Adding the GPU fan speed to the chart shows how the Founders Edition cooler attempts to compensate by temporarily boosting fan speed to 56% during this time, which also means that Origin isn’t benefiting as much from the slower case fans’ lower noise as it should. Balancing them better would benefit noise more. As neat of a party trick as it is to have the case fans stay off unless the loop needs them, Origin should have kept at least one or two running at all times – like rear exhaust – to give the GPU some help. Besides, letting the hot air linger could encourage local hot spots to form on subcomponents that aren’t directly monitored, which can lead to problems.

Power At The Wall

Now we’ll look at full system load power consumption by logging it at the wall – so everything, even efficiency losses from the PSU, is taken into account. Idle, it pulled a relatively high 125W. At the 180-second mark, the CPU load kicks in. There’s a jump at 235 seconds when the GPU load kicks in. We see a slight ramp upwards in power consumption after that, which tracks with increasing leakage as the parts heat up, before settling in at an average of 884W at steady state.
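The failure mode described above can be sketched as a toy model. This is our own illustration, not Origin’s or Corsair’s actual fan-curve logic, and the trigger points and slopes are made up: one policy drives every case fan from coolant temperature alone (the behavior observed here), while a safer policy also reads the hottest air-cooled sensor and keeps a minimum floor so exhaust never fully stops.

```python
# Toy model of two fan-control policies. All thresholds are hypothetical.

def fan_percent_coolant_only(coolant_c: float) -> float:
    """Fans stay off until the liquid warms up -- the configuration observed
    in this build, which ignores every air-cooled component."""
    if coolant_c < 38.0:  # hypothetical trigger point
        return 0.0
    return min(100.0, (coolant_c - 38.0) * 10.0)

def fan_percent_mixed(coolant_c: float, gpu_vram_c: float) -> float:
    """A safer policy: respond to the hottest relevant sensor, and keep a
    minimum floor so at least some exhaust always runs."""
    from_coolant = fan_percent_coolant_only(coolant_c)
    from_vram = min(100.0, max(0.0, (gpu_vram_c - 70.0) * 2.5))
    return max(20.0, from_coolant, from_vram)  # 20% minimum floor

# Early in the load: coolant is still cool (36C) because the liquid never
# touches the GPU, but the air-cooled VRAM is already at 95C.
print(fan_percent_coolant_only(36.0))   # 0.0  -> fans off, VRAM keeps climbing
print(fan_percent_mixed(36.0, 95.0))    # 62.5 -> fans actually help the GPU
```

The point of the sketch is that any single-sensor policy is only as good as what that sensor can see; the coolant thermistors simply cannot observe the 575W of heat the 5090 dumps into the case air.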
Acoustics

Next we’ll cover dBA over time as measured in our hemi-anechoic chamber. At idle, the fans are off, which makes for a functionally silent system at the noise floor. The first fans to come on in the system are on the GPU, bringing noise levels up to a still-quiet range of 25-28dBA at 1 meter. The loudest point is 30.5dBA, when the GPU fans briefly ramp before the system fans kick in.

CPU Frequency vs. Original Review

For CPU frequency, fortunately for Origin, it didn’t randomly throttle by 1GHz this time. The 9800X3D managed to stay at 5225MHz during the CPU-only load portion of the torture test – the same frequency that we recorded in our original review for the CPU, so that’s good. At steady state, with the GPU dumping over 500W of heat into the case, the average core frequency dropped by 50MHz. If Origin made better use of its dozen or so fans, it should hold onto more of that frequency.

BIOS Configuration

BIOS for the Origin pre-built is set up sensibly, at least. The build date is January 23, which was the latest available in the window between when we ordered the system at the 50-series launch and when it was actually assembled. Scrutinizing the chosen settings revealed nothing out of line. The DDR5-6000 memory profile was enabled and the rest of the core settings were properly set to Auto. This was all fine.

Setup and Software

The Windows install was normal, with no bloatware. That’s also good. The desktop had a few things on it. A “Link Windows 10 Key to Microsoft Account” PDF is helpful for people who don’t know what to do if their system shows the Activate Windows watermark. Confusingly, it hasn’t been updated to say “11” instead of “10.” It also shepherds the user towards using a Microsoft account. That’s not necessarily a bad thing, but we don’t like how it makes it seem necessary, because it’s not and you shouldn’t.
There’s also an “Origin PC ReadMe” PDF that doesn’t offer much except coverage for Origin’s ass with disclaimers and points of contact for support. One useful thing is that it points the user to “C:\ORIGIN PC” to find “important items.” That folder has Origin-branded gifs, logos, and wallpapers, as well as CPU-Z, TeamViewer, and a Results folder. TeamViewer is almost certainly there for Origin’s support teams to remotely inspect the PC during support calls. It makes sense to have that stuff on there. The Results folder contains an OCCT test report that shows a total of 1 hour and 52 minutes of testing: a CPU test for 12 minutes; CPU + RAM, memory, and 3D adaptive tests for 30 minutes each; then 10 minutes of OCCT’s “power” test, which is a combined full-system load. It’s great that Origin actually does testing and provides this log as a baseline for future issues and for base expectations. This is good and gives you something to work from. Not having OCCT pre-installed to actually run again for comparison is a support oversight. It’s free for personal use, at least, so the user could easily download it. There weren’t any missing drivers in Device Manager, and NVIDIA’s 572.47 driver from February 20 was the latest at the time of the build – both good things. There wasn’t any bundled bloatware installed, so points to Origin for that. iCUE itself isn’t as bad as it used to be, but it’s still clunky – like the preloaded fan profiles not showing their set points.

Packaging

The Origin Genesis pre-built came in a massive wooden crate that was big enough to need two people to move it around. Considering this PC was $6,500 after taxes (at the time), we’re definitely OK with the wooden crate and its QR code opening instructions. Origin uses foam, a fabric cover, a cardboard box within the crate, and the crate itself for the PC. The case had two packs of expanding foam inside it, allowing the GPU to arrive undamaged and installed.
The sticker on the side panel also had clear instructions. These are good things. Unfortunately, there’s a small chip in the paint on top of the case, but it’s not as bad as the last Origin paint issues we had, and we think it’s unrelated to the packaging itself.

Accessories

The accessory kit is basic and came inside of a box with the overused, cringey adage “EAT SLEEP GAME REPEAT” printed on it. Inside are the spare PSU cables (which we’re happy to see included), an AC power cable, the stock 5090 FE power adapter, standard motherboard and case accessories, a G1/4 plug tool and extra plugs, and a piece of soft tubing with a fitting on one end that can be used to help drain the cooling loop. All of this is good.

Conclusion

During this review process, the price went even higher. You already shouldn’t buy this, but just to drive it home: for the same configuration, the Genesis now costs $7,557 after the discount, off the new sticker price of $8,396. That’s an increase of over $1,000, making the premium over current DIY pricing roughly $1,700-$2,500. Now, there are good reasons for the price to go up. Tariffs have a real impact on pricing and we’re going to see it everywhere, and tariffs are also outside of Corsair’s control. We don’t fault them for that. But that doesn’t change the fact that the cost over DIY is so insanely elevated. Even Corsair’s own competitors offer better value than this, like Maingear. At $8,400 sticker price, you’d have to be drunk on whatever is discoloring Origin’s loop to buy it. Nobody should buy this, especially not for gaming. If you’re doing productivity or creative work that would seriously benefit from the 5090’s 32GB of VRAM, then look elsewhere for a better deal.
This costs nearly as much as an RTX Pro 6000, which has 96GB of VRAM and is better. It would actually be cheaper to get scalped for a 5090 on eBay and then buy the whole rest of the computer than to buy this Origin system. That’s how crazy this is. The upcharge, even assuming a 5090 price of $2,800, is just way too high versus other system integrators. Seriously, Alienware is cheaper at this point – by thousands of dollars. Alienware. We can’t recommend this PC. Ignoring the price, the memory on the video card is hitting 100 degrees C in workloads while the fans aren’t turning on, because the fans are set to respond to liquid temperature and the liquid doesn’t touch the GPU. For that reason alone, it gets a failing grade. For our thermal testing, pre-builts have to pass the torture test. If they don’t, they instantly fail. That’s how it always works for our pre-built reviews. This system has, unfortunately, instantly failed.
  • Donkervoort integrates Conflux 3D printed air coolers in P24 RS supercar

    Dutch supercar manufacturer Donkervoort Automobielen has teamed up with Australian thermal technology specialist Conflux Technology to develop 3D printed water-charge air coolers (WCAC) for the upcoming P24 RS model, marking a milestone in the application of Formula 1-grade additive manufacturing for road-legal vehicles.
    The collaboration, detailed in the latest “Living the Drive: Engineering Chapter” from Donkervoort, centers around an ultra-lightweight, compact thermal management system developed using additive manufacturing. The new liquid-to-air WCAC units weigh just 1.4 kg each, compared to the 16 kg of traditional air-to-air systems, delivering enhanced throttle response, improved packaging, and a significant reduction in engine bay volume.
    Conflux’s custom water-charge air coolers (WCAC) provide sharper throttle response, improved packaging, and reduced weight. Image via Donkervoort Automobielen.

    “We challenged ourselves to find the best way to keep intake air cold, and Conflux delivered,” said Denis Donkervoort, Managing Director at Donkervoort. “We gave Conflux our exact specifications, and they delivered a solution so effective, we could even downsize it from the original prototype.”
    Each Conflux air cooler is custom 3D printed in aluminium alloy with tailored fin geometry, density, and dimensions to fit directly between the PTC engine’s turbochargers and throttle bodies. The units are supported by a thin-wall radiator system requiring less coolant and surface area than conventional radiators.
    Michael Fuller, Founder of Conflux, added: “This is Formula 1 cooling technology, scaled for the road. Collaborations like this show how additive manufacturing can deliver high-performance solutions in limited-production automotive environments.”
    By relocating the WCACs into the engine bay and shortening the inlet tract, the system provides faster air delivery to the combustion chamber, thereby boosting engine efficiency and driver responsiveness. Combined with Van der Lee’s billet turbochargers, this thermal innovation is a core element of Donkervoort’s evolution of its lightweight PTC engine platform.
    Daniel France, Conflux Business Development Lead. Image via Donkervoort Automobielen.

    Additive manufacturing reshapes thermal systems across high-performance sectors
    This announcement follows Conflux Technology’s broader push into international markets and automotive applications. In April 2025, the company launched a UK hub to support European customers and expand production of its 3D printed heat exchangers. Conflux is among a growing number of firms leveraging additive manufacturing to rethink thermal systems; recent research has shown that 3D printed condensers can outperform traditional designs, underscoring the performance benefits of AM-enabled cooling solutions.
    Other developments include Conflux’s partnership with Rocket Factory Augsburg to integrate 3D-printed heat exchangers into orbital rockets and its release of a high-performance cartridge-style heat exchanger designed for fluid control systems in automotive and industrial environments.
    Featured image shows the Donkervoort P24 RS air coolers sitting in the engine bay. Photo via Donkervoort Automobielen.