  • Embrace the challenge of learning a new language! It's not just about words—it's about expanding your mind and spirit. As we dive into the fascinating world of estimating Celsius and other mental metrics, remember that every little step counts. Don't let the struggle of translation hold you back; let it propel you forward. Every mistake is a stepping stone to mastery, so keep your spirits high and your determination strong. You've got this!

    For Americans Only: Estimating Celsius and Other Mental Metrics
    hackaday.com
    I know many computer languages, but I’ve struggled all my life to learn a second human language. One of my problems is that I can’t stop trying to translate in…
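    The linked piece is about mental-math unit conversion. As a quick illustration of the idea (a sketch of one common shortcut, not necessarily the method the Hackaday article teaches), the snippet below compares the exact Fahrenheit-to-Celsius formula with the familiar "subtract 30, then halve" estimate:

```python
def f_to_c_exact(f: float) -> float:
    """Exact conversion: C = (F - 32) * 5 / 9."""
    return (f - 32) * 5 / 9

def f_to_c_mental(f: float) -> float:
    """Common mental shortcut: C is roughly (F - 30) / 2."""
    return (f - 30) / 2

# Compare the shortcut with the exact value at everyday temperatures.
for f in (32, 50, 70, 90, 100):
    exact, rough = f_to_c_exact(f), f_to_c_mental(f)
    print(f"{f:>3} F -> exact {exact:5.1f} C, estimate {rough:5.1f} C, error {rough - exact:+.1f} C")
```

    The shortcut is exact at 50 °F (10 °C) and stays within about three degrees across typical outdoor temperatures, which is usually close enough for mental estimation.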
  • Could Iran Have Been Close to Making a Nuclear Weapon? Uranium Enrichment Explained

    www.scientificamerican.com
    June 13, 2025 | 3 min read | By Deni Ellis Béchard, edited by Dean Visser
    When Israeli aircraft recently struck a uranium-enrichment complex in the nation, Iran could have been days away from achieving “breakout,” the ability to quickly turn “yellowcake” uranium into bomb-grade fuel, with its new high-speed centrifuges.
    Image caption: Men work inside a uranium conversion facility just outside the city of Isfahan, Iran, on March 30, 2005. The facility in Isfahan made hexafluoride gas, which was then enriched by feeding it into centrifuges at a facility in Natanz, Iran. Getty Images
    In the predawn darkness on Friday local time, Israeli military aircraft struck one of Iran’s uranium-enrichment complexes near the city of Natanz. The warheads aimed to do more than shatter concrete; they were meant to buy time, according to news reports. For months, Iran had seemed to be edging ever closer to “breakout,” the point at which its growing stockpile of partially enriched uranium could be converted into fuel for a nuclear bomb. (Iran has denied that it has been pursuing nuclear weapons development.)
    But why did the strike occur now? One consideration could involve the way enrichment complexes work. Natural uranium is composed almost entirely of uranium 238, or U-238, an isotope that is relatively “heavy” (meaning it has more neutrons in its nucleus). Only about 0.7 percent is uranium 235 (U-235), a lighter isotope that is capable of sustaining a nuclear chain reaction. That means that in natural uranium, only seven atoms in 1,000 are the lighter, fission-ready U-235; “enrichment” simply means raising the percentage of U-235.
    U-235 can be used in warheads because its nucleus can easily be split. The International Atomic Energy Agency uses 25 kilograms of contained U-235 as the benchmark amount deemed sufficient for a first-generation implosion bomb. In such a weapon, the U-235 is surrounded by conventional explosives that, when detonated, compress the isotope. A separate device releases a neutron stream. (Neutrons are the neutral subatomic particles in an atom’s nucleus that add to its mass.) Each time a neutron strikes a U-235 atom, the atom fissions; it divides and spits out, on average, two or three fresh neutrons—plus a burst of energy in the form of heat and gamma radiation. And the emitted neutrons in turn strike other U-235 nuclei, creating a self-sustaining chain reaction among the U-235 atoms that have been packed together into a critical mass. The result is a nuclear explosion. By contrast, the more common isotope, U-238, usually absorbs slow neutrons without splitting and cannot drive such a devastating chain reaction.
    To enrich uranium so that it contains enough U-235, the “yellowcake” uranium powder that comes out of a mine must go through a lengthy process of conversions to transform it from a solid into the gas uranium hexafluoride. First, a series of chemical processes refine the uranium; then, at high temperatures, each uranium atom is bound to six fluorine atoms. The result, uranium hexafluoride, is unusual: below 56 degrees Celsius (132.8 degrees Fahrenheit) it is a white, waxy solid, but just above that temperature it sublimates into a dense, invisible gas.
    During enrichment, this uranium hexafluoride is loaded into a centrifuge: a metal cylinder that spins at tens of thousands of revolutions per minute—faster than the blades of a jet engine. As the heavier U-238 molecules drift toward the cylinder wall, the lighter U-235 molecules remain closer to the center and are siphoned off. This new, slightly U-235-richer gas is then put into the next centrifuge. The process is repeated 10 to 20 times as ever more enriched gas is sent through a series of centrifuges.
    Enrichment is a slow process, but the Iranian government has been working on this for years and already holds roughly 400 kilograms of uranium enriched to 60 percent U-235. This falls short of the 90 percent required for nuclear weapons. But whereas Iran’s first-generation IR-1 centrifuges whirl at about 63,000 revolutions per minute and do relatively modest work, its newer IR-6 models, built from high-strength carbon fiber, spin faster and produce enriched uranium far more quickly.
    Iran has been installing thousands of these units, especially at Fordow, an underground enrichment facility built beneath 80 to 90 meters of rock. According to a report released on Monday by the Institute for Science and International Security, the new centrifuges could produce enough 90 percent U-235 uranium for a warhead “in as little as two to three days” and enough for nine nuclear weapons in three weeks—or 19 by the end of the third month.
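    The cascade the article describes (each pass nudging the U-235 fraction a little higher, repeated 10 to 20 times) can be made concrete with a toy calculation. The sketch below is a simplified model for intuition only: the per-pass separation factor is an assumed round number, not a figure from the article or from any real centrifuge design.

```python
def enrich_once(u235_fraction: float, separation_factor: float) -> float:
    """One idealized pass: scale the U-235/U-238 abundance ratio by the
    separation factor, then convert back to a fraction."""
    ratio = u235_fraction / (1.0 - u235_fraction)
    ratio *= separation_factor
    return ratio / (1.0 + ratio)

def passes_needed(start: float, target: float, separation_factor: float) -> int:
    """Count idealized passes needed to reach the target U-235 fraction."""
    fraction, passes = start, 0
    while fraction < target:
        fraction = enrich_once(fraction, separation_factor)
        passes += 1
    return passes

NATURAL_U235 = 0.007   # ~0.7 percent U-235 in natural uranium (figure from the article)
ALPHA = 1.5            # assumed overall separation factor per pass (illustrative only)

print("0.7% -> 60%:", passes_needed(NATURAL_U235, 0.60, ALPHA), "passes")
print("60%  -> 90%:", passes_needed(0.60, 0.90, ALPHA), "passes")
```

    With that assumed factor, the model needs roughly 14 passes to reach 60 percent U-235 but only about 5 more to reach 90 percent, which is consistent with the article's range of 10 to 20 stages and helps explain why a 60 percent stockpile is considered so close to weapons grade.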
  • Dell and Nvidia to Power the Next Generation of Supercomputers: A Move Towards Sustainable AI Growth

    Key Takeaways

    Dell and Nvidia will jointly provide the architecture for the US Department of Energy’s next supercomputer, named Doudna.
    Dell will focus more on sustainable hardware and cooling systems, while Nvidia will provide its AI architecture, including the Vera Rubin AI chips.
    The US Department of Energy wants to focus more on the sustainable development of AI, hence the choice of environment-conscious companies.

    The US Department of Energy said that Dell’s next supercomputer will be delivered with Nvidia’s ‘Vera Rubin’ AI chips, marking the beginning of a new era of AI dominance in research. The system is expected to be 10x faster than the current generation of supercomputers, which was provided by HP.
    The supercomputer will be named ‘Doudna,’ after Jennifer Doudna, a Nobel Prize winner who made key contributions to CRISPR gene editing.
    Supercomputers have been instrumental in key scientific discoveries in the last few decades and also played a big role in the design and maintenance of the U.S. nuclear weapons arsenal. And now, with the introduction of artificial intelligence, we’re heading towards a new decade of faster and more efficient scientific research.

    It (supercomputers) is the foundation of scientific discovery for our country. It is also a foundation for economic and technological leadership. And with that, national security. – Nvidia CEO Jensen Huang

    Dell Going All in on AI
    This isn’t the first time Dell and Nvidia have come together to develop newer AI solutions. Back in March 2024, Dell announced the Dell AI Factory with NVIDIA, an end-to-end enterprise AI solution designed for businesses. 
    This joint venture used Dell’s infrastructure, such as servers, storage, and networking, combined with NVIDIA’s AI architecture and technologies such as GPUs, DPUs, and AI software.

    Image Credit – Dell
    For instance, the Dell PowerEdge server uses NVIDIA’s full AI stack to provide enterprises with solutions required for a wide range of AI applications, including speech recognition, cybersecurity, recommendation systems, and language-based services.
    The demand for Dell’s AI servers has also increased, reaching $12.1B in the first quarter of 2025, with a total backlog of $14.4B, which suggests a strong future demand and order book. The company has set a bold profit forecast of between $28.5B and $29.5B, against the analysts’ prediction of $25.05B.
    With Doudna, Dell is well-positioned to lead the next generation of supercomputers in AI research, invention, and discoveries.
    Focus on Energy Efficiency
    Seagate has warned about the unprecedented increase in demand for AI data storage in the coming few years, which is a significant challenge to the sustainability of AI data centers. Global data volume is expected to increase threefold by 2028. 

    Image Credit – DIGITIMES Asian
    The data storage industry currently produces only 1-2 zettabytes (1 zettabyte equals 1 trillion gigabytes) of storage annually, which is much lower than what will be required in the next 4-5 years.
    At the same time, Goldman Sachs predicts that power requirements will also go up by 165% by 2030 due to increasing demand for AI data centers. This calls for a more sustainable approach for the supercomputing industry as well. 
    Dell will use its proprietary technologies, such as Direct Liquid Cooling, the PowerCool eRDHx, and Smart Flow design in the Doudna, ensuring energy efficiency.

    Direct Liquid Cooling (DLC) increases computing density by supporting more cores per rack, which reduces cooling costs by as much as 45%.
    Dell’s PowerCool eRDHx is a self-contained airflow design that can capture 100% of the heat generated by IT systems. This reduces the dependency on expensive chillers, as the eRDHx can operate at typical temperatures of 32 to 36 degrees Celsius, leading to 60% savings in cooling energy costs.
    Lastly, the Dell Smart Flow design improves airflow within IT components and reduces fan power by 52%. This leads to better performance with fewer cooling requirements. (A rough back-of-envelope reading of these percentages is sketched after this list.)
    Besides this, Dell plans to incorporate Leak Sense Technology. If a coolant leak occurs, the system’s leak sensor will log an alert in the iDRAC system so that swift action can be taken.
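    To get a feel for what those headline percentages could mean at the facility level, here is a deliberately simplified estimate. The baseline cooling and fan shares below are assumptions chosen for illustration, not figures from Dell, the DOE, or this article; only the 60% and 52% reductions come from the claims above.

```python
# Illustrative back-of-envelope only; baseline shares are assumed, not sourced.
it_load_kw = 1000.0      # assumed IT load of a rack row
cooling_share = 0.35     # assumed: cooling plant draws 35% of IT load in a conventional air-cooled hall
fan_share = 0.05         # assumed: server fans draw 5% of IT load

baseline_kw = it_load_kw * (1 + cooling_share + fan_share)

# Apply the figures quoted above: 60% less cooling energy (eRDHx), 52% less fan power (Smart Flow).
improved_kw = it_load_kw * (1 + cooling_share * (1 - 0.60) + fan_share * (1 - 0.52))

print(f"Baseline draw:       {baseline_kw:.0f} kW (PUE ~ {baseline_kw / it_load_kw:.2f})")
print(f"With stated savings: {improved_kw:.0f} kW (PUE ~ {improved_kw / it_load_kw:.2f})")
print(f"Overall reduction:   {100 * (1 - improved_kw / baseline_kw):.1f}%")
```

    Under these assumptions, cutting cooling energy by 60% and fan power by 52% trims total facility draw by roughly 17%, so the gains are real but bounded by the fact that the compute load itself is unchanged.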

    As per the IEA’s report titled ‘Energy and AI’, data center electricity demand will increase to 945 terawatt-hours (TWh) by 2030. For comparison, this is more than the total electricity consumption of Japan today.
    The US alone will consume more electricity in 2030 for processing data than for manufacturing all energy-intensive goods combined, including aluminum, steel, cement, and chemicals.
    Therefore, the need to develop sustainable AI data centers and supercomputers cannot be overstated. Dell’s technology-focused, sustainability-minded approach could play a pivotal role in how efficiently we use AI in the next decade.
    The US Department of Energy’s choice of Dell also seems to be a conscious move towards companies that prioritize sustainability and can support the long-term viability of research-intensive AI setups.

    Krishi is a seasoned tech journalist with over four years of experience writing about PC hardware, consumer technology, and artificial intelligence.  Clarity and accessibility are at the core of Krishi’s writing style.
    He believes technology writing should empower readers—not confuse them—and he’s committed to ensuring his content is always easy to understand without sacrificing accuracy or depth.
    Over the years, Krishi has contributed to some of the most reputable names in the industry, including Techopedia, TechRadar, and Tom’s Guide. A man of many talents, Krishi has also proven his mettle as a crypto writer, tackling complex topics with both ease and zeal. His work spans various formats—from in-depth explainers and news coverage to feature pieces and buying guides. 
    Behind the scenes, Krishi operates from a dual-monitor setup (including a 29-inch LG UltraWide) that’s always buzzing with news feeds, technical documentation, and research notes, as well as the occasional gaming sessions that keep him fresh.
    Krishi thrives on staying current, always ready to dive into the latest announcements, industry shifts, and their far-reaching impacts.  When he's not deep into research on the latest PC hardware news, Krishi would love to chat with you about day trading and the financial markets—oh! And cricket, as well.

    techreport.com
  • The Carbon Removal Industry Is Already Lagging Behind Where It Needs to Be

    futurism.com
    It may be time to suck it up — and we don't just mean the carbon in the atmosphere. No, we're talking about reckoning with the possibility that our attempts at capturing the greenhouse gas to stave off climate disaster are already hopelessly behind schedule, New Scientist reports, if they're not in vain entirely.
    To illustrate, here are some simple numbers. The CO2 removal industry expects to hit a milestone of removing one million metric tons of CO2 this year. And companies across the globe have bought carbon credits to remove 27 million more, according to data from CDR.fyi cited in the reporting (more on these carbon credit schemes in a moment).
    That sounds like a lot, but it really isn't. As New Scientist notes, the Intergovernmental Panel on Climate Change — the leading authority on these issues — concluded in a 2022 report that we need to be removing up to 16 billion tons of carbon, not millions, each year to keep the rise in global temperature from exceeding 1.5 degrees Celsius (2.7 degrees Fahrenheit) of warming by the middle of the century, past which the most drastic effects of climate change are believed to be irreversible. (A quick calculation of that gap appears at the end of this item.)
    "It's not scaling up as fast as it would need to if we are going to reach multiple gigatons by 2050," Robert Höglund at Marginal Carbon, a climate consultancy based in Sweden, told the magazine.
    Carbon capture is not the be-all and end-all. The fact remains that humanity needs to drastically reduce its emissions, which probably means reorganizing society — or at least its energy production and consumption — as we know it. Simply removing the CO2 that's already there is more like a band-aid that buys us a little time; eventually, we'll need to rip it off.
    For these reasons, some critics fear that carbon capture — and even more drastic interventions, like attempting to dim the Sun — could distract from climate change's systemic causes. But there's a lot of enthusiasm for the approach all the same, both from scientists and investors. The IPCC acknowledged in its 2022 report that carbon removal was "unavoidable" — as in, essential to meeting climate targets.
    One popular method of carbon removal is called direct air capture, which involves sucking the carbon straight from the air using massive industrial facilities. A more circuitous approach that's gaining steam involves extracting CO2 out of the ocean, freeing up room for the world's largest carbon sink to passively absorb even more of the greenhouse gas.
    All of these initiatives, though, are basically just getting off the ground. And the corporate investment, which once promised billions of dollars in cash, seems to be cooling. More than 90 percent of all carbon removal credits sold this year were bought by a single company, Microsoft, New Scientist notes, probably to gloss over the egregious energy bill it has accrued from building loads of AI datacenters.
    This also touches on the fact that the practice of buying carbon credits can be used as a means of corporate greenwashing. By paying another firm to "certify" that it will remove a certain amount of carbon at some undetermined point in the future, a company can report a greener carbon balance sheet without actually reducing its emissions.
    In any case, staking the industry's hopes on corporate munificence is a dicey prospect indeed.
    "I have been raising the alarm for about a year and a half," Eli Mitchell-Larson at Carbon Gap, a UK carbon dioxide removal advocacy organisation, told New Scientist. "If we're just waiting for the waves of free philanthropic money from corporations to fill a hole on their sustainability report, we're not really going to solve the problem."
    More on climate change: Scientists Just Found Who's Causing Global Warming
  • BougeRV water heater review: hot showers to go

    Hot water is like internet connectivity for most Verge readers: you just expect it to be there. But that’s unlikely to be the case this summer when tent camping at a music festival or road-tripping into the great unknown. That’s where BougeRV’s battery-powered shower comes in.
    The “Portable Propane Outdoor Camping Water Heater” from BougeRV is not only optimized for search engine discovery, it also delivers a luxurious spray of hot steaming water to the unwashed, be they human, canine, or stubborn pots and pans. Charge up the battery, attach a propane canister, drop the pump into a jug of water, and you’re ready to get sudsing.
    It’s so useful and flexible that I’ve ditched my plans to install a permanent shower cabin and expensive hot water system inside my adventure van, even if I don’t completely trust it.
    Verge Score: 8. The Good: battery-powered portability; temperature control; adjustable flow to save water; lots of safety features. The Bad: lots of hoses and cables to snag; weak shower head holder; no bag to carry all the accessories; longevity concerns.
    My current portable shower consists of an 11-liter water bag, a manual foot pump, and a spray nozzle. To make it hot, I have to heat water on the stove or hang the bag in the sun for several hours, yet it still costs over …. For …, the BougeRV heated shower seems like a bargain.
    The BougeRV system can produce a maximum heat output of 20,500 BTUs — about half of a typical residential gas water heater. It measures 15.75 x 6.7 x 14.57 inches and weighs 13.2 pounds, making it compact and fairly lightweight, with two big handles for easy carrying. The hoses and cabling make it a little unwieldy — capable of chaos inside a small space unless handled with care.
    Assembly starts with screwing in an easy-to-find one-pound propane canister that attaches at the rear of the unit. That’s the size BougeRV recommends, but you wouldn’t be the first to instead run a hose from your RV’s existing propane tank to the pressure regulator on the water heater. Two quick-connect water hoses — labeled blue and red for idiot-proof attachment — route the water from your chosen receptacle, through that gas furnace, and out through the showerhead. The long 2.5 m shower hose allows for flexible placement of the heater.
    The small water pump measures just 2.24 inches across, so it easily fits through the opening of standard jerry cans. The pump is electrically powered by the BougeRV unit, which is powered by its rechargeable battery, an AC wall jack, or a 12V adapter that plugs into the cigarette jack of your vehicle or solar generator.
    Image captions: My outdoor shower using a standard jerry can for water. Magnets hold the towel in place and I’d buy a magnetic shower head holder to complete the setup. / I can place the BougeRV system on my sliding tray for a gear cleaning station. A long press on the pump button bypasses the heater to save gas. / A makeshift outdoor sink. The included holder is too weak to hold the shower head in more extreme positions. / Hank hates getting hosed off with cold water but enjoyed this lush heated rinse. Photos by Thomas Ricker / The Verge.
    The 2500 mAh / 12 V integrated lithium-ion battery takes about three hours to charge from the included charger. A full battery and one-pound canister of liquid propane gas can pump out about an hour’s worth of hot water before both run dry. The shower’s gas consumption rate is 20 MJ/h.
    Alternatively, you can save gas with a long press on the pump button to put the shower into cold water mode — ideal for rinsing off your mountain bike, hiking shoes, or wet suit, for example.
    The dial on the front of the heater controls the size of the flame. I did a handful of tests, starting with water measuring between 13 and 16 degrees Celsius according to the display on the BougeRV water heater. With the dial turned all the way to the left, the water pouring from the shower head rose to 23–25 degrees Celsius after just a few seconds. Turned all the way to the right, the temperature maxed out at a steamy 34–41 degrees Celsius in about 30 seconds. (A rough energy-balance check of these figures appears at the end of this review.)
    Recycling the water can make it even hotter, if you dare. After two or three cycles on max, the heater boosted the temperature above 51 degrees Celsius before the unit shut down with an error, by design. It’s not meant to exceed an average water temperature above 50 degrees Celsius. A simple on/off cycle reset the E6 error.
    Water flow is between 2.2 and 3 liters per minute — well below the 9 to 12 L/min flow you can expect from a modern home shower. That’s still acceptable, in my opinion, and far superior to nothing, which is the typical alternative when camping away from home. The shower head has a rocker switch to toggle between hardish, mixed, and soft water flow rates, as well as an on/off limiter button to help conserve water between lathers.
    It’s surprisingly quiet even with the pump turned on. There’s some rapid clicking to ignite the gas whenever the flow of water returns, and the pump produces a low-level hum that’s quickly drowned out by the sound of spraying water.
    The water heater is also protected from tilts, bumps, and an empty water source. When I leaned my review unit over by about 30 degrees, it shut off. It also shut off automatically after two minutes of trying to pump from an empty bucket. A master override on/off switch prevents the unit from turning on accidentally if the on/off button on the front is bumped during transport or storage.
    I’m impressed by BougeRV’s water heater, but I’m a little concerned about its durability over time. After using it on the beach on a windy day, I ran into trouble once I returned inside: the heater didn’t heat, and the water was reduced to a trickle out of the showerhead. It’s possible that some sediment trapped in the lines reduced the flow rate below the 1.2 L/min required for ignition. Nevertheless, the issue was resolved after a few minutes of fiddling with the hoses and filters, and turning the unit on and off again. BougeRV offers a two-year warranty and says the water heater is rated at IPX4. So while it’s resistant to splashing water, there’s no assurance offered against dust and blowing sand.
    I do have a few other gripes. Those hoses can be a tripping and snagging hazard, and the plastic clip meant to hold the showerhead to one of the lifting handles is too weak to keep it from rotating and spraying your surroundings. I also wish BougeRV bundled the heater with an accessory bag to carry all the power adapters and hoses.
    And when putting the device away, you have to tip it forward to drain all the collected water from the inlet and outlet — there’s no automatic expulsion mechanism. But really, these are trivial issues for what the unit does at this price.
    Image caption: A cold water option is great for cleaning gear.
    Prior to this review, I had been in the late planning stages of having a shower cabin, water pump, gas heater, extra-large water tank, and all necessary plumbing installed in my Sprinter van. Total cost: about …. I’m now convinced that a portable system like what BougeRV offers is a better option. Why pay so much for something so permanent that’s only used a few minutes each week, for maybe half the year?
    Instead, BougeRV’s portable water heater can function as an outdoor shower during the summer months or be moved inside when coupled with a portable shower curtain and basin, all for less than …. That sounds like a better use of my money, and probably yours if you’re an aspiring vanlifer.
    And when the van is parked, I can bring those hot jets of water anywhere my adventures might take me: to clean up after mountain biking in the muddy forest or kitesurfing in the salty sea, to wash the dog outside after rolling in shit again, or to take a refreshing shower during a sweaty four-day music festival.
    A near-identical water heater is sold under the Ranien and Camplux brands, but those have larger 4,000 mAh batteries and list for between … and …, so it might pay to shop around.
    Photos by Thomas Ricker / The Verge
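    As promised above, here is a rough energy-balance check of the review’s numbers. It is a back-of-envelope sketch only: it takes the quoted burner rating, flow range, and inlet temperature at face value and assumes every joule from the burner ends up in the water, which real hardware will not achieve.

```python
BTU_TO_J = 1055.06

burner_btu_per_h = 20_500                            # rated heat output quoted in the review
burner_watts = burner_btu_per_h * BTU_TO_J / 3600    # roughly 6 kW

flow_l_per_min = 2.5                                 # midpoint of the quoted 2.2-3 L/min range
flow_kg_per_s = flow_l_per_min / 60                  # 1 liter of water is about 1 kg
SPECIFIC_HEAT_WATER = 4186                           # J/(kg*K)

# Ideal temperature rise if all burner heat reaches the water.
delta_t = burner_watts / (flow_kg_per_s * SPECIFIC_HEAT_WATER)
inlet_c = 14                                         # roughly the 13-16 C inlet water in the tests

print(f"Burner power: {burner_watts / 1000:.1f} kW")
print(f"Ideal temperature rise at {flow_l_per_min} L/min: {delta_t:.0f} C")
print(f"Ideal outlet temperature: {inlet_c + delta_t:.0f} C")
```

    The ideal rise works out to roughly 34 °C, so the measured maximum of 34–41 °C (a rise of about 20–27 °C over the inlet water) corresponds to roughly 60–80 percent of the theoretical figure, a plausible efficiency for a small propane burner. The quoted 20 MJ/h gas consumption also lines up, since 20,500 BTU/h is about 21.6 MJ/h.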
    #bougerv #water #heater #review #hot
    BougeRV water heater review: hot showers to go
    www.theverge.com
    Hot water is like internet connectivity for most Verge readers: you just expect it to be there. But that’s unlikely to be the case this summer when tent camping at a music festival or road-tripping into the great unknown. That’s where BougeRV’s battery-powered shower comes in. The $310 “Portable Propane Outdoor Camping Water Heater” from BougeRV is not only optimized for search engine discovery, it also delivers a luxurious spray of hot steaming water to the unwashed, be they human, canine, or stubborn pots and pans. Charge up the battery, attach a propane canister, drop the pump into a jug of water, and you’re ready to get sudsing. It’s so useful and flexible that I’ve ditched my plans to install a permanent shower cabin and expensive hot water system inside my adventure van, even if I don’t completely trust it.

    Verge Score: 8 — $310 at BougeRV. The Good: battery-powered portability; temperature control; adjustable flow to save water; lots of safety features. The Bad: lots of hoses and cables to snag; weak shower head holder; no bag to carry all the accessories; longevity concerns.

    My current portable shower consists of an 11-liter water bag, a manual foot pump, and a spray nozzle. To make it hot, I have to heat water on the stove or hang the bag in the sun for several hours, yet it still costs over $150. For $310, the BougeRV heated shower seems like a bargain.

    The BougeRV system can produce a maximum heat output of 20,500 BTUs — about half of a typical residential gas water heater. It measures 15.75 x 6.7 x 14.57 inches (40 x 17 x 31cm) and weighs 13.2 pounds (6.21kg), making it compact and fairly lightweight, with two big handles for easy carrying. The hoses and cabling make it a little unwieldy — capable of chaos inside a small space unless handled with care.

    Assembly starts with screwing in an easy-to-find one-pound (454g) propane canister that attaches at the rear of the unit. That’s the size BougeRV recommends, but you wouldn’t be the first to instead run a hose from your RV’s existing propane tank to the pressure regulator on the water heater. Two quick-connect water hoses — labeled blue and red for idiot-proof attachment — route the water from your chosen receptacle, through the gas furnace, and out through the showerhead. The long 2.5m (8.2-foot) shower hose allows for flexible placement of the heater.

    The small water pump measures just 2.24 inches (5.7cm) across, so it easily fits through the opening of standard jerry cans. The pump is electrically powered by the BougeRV unit, which is powered by its rechargeable battery, an AC wall jack, or a 12V adapter that plugs into the cigarette jack of your vehicle or solar generator.

    Photo captions: My outdoor shower using a standard jerry can for water — magnets hold the towel in place, and I’d buy a magnetic shower head holder to complete the setup. The BougeRV system on my sliding tray as a gear cleaning station; a long press on the pump button bypasses the heater to save gas. A makeshift outdoor sink — the included holder is too weak to hold the shower head in more extreme positions. Hank hates getting hosed off with cold water but enjoyed this lush heated rinse. (He rolled in dirt immediately after.)

    The 2500mAh / 12V (30Wh) integrated lithium-ion battery takes about three hours to charge from the included charger. A full battery and a one-pound (454g) canister of liquid propane gas can pump out about an hour’s worth of hot water before both run dry. The shower’s gas consumption rate is 20MJ/h. Alternatively, you can save gas with a long press on the pump button to put the shower into cold water mode — ideal for rinsing off your mountain bike, hiking shoes, or wet suit, for example.

    The dial on the front of the heater controls the size of the flame. I did a handful of tests, starting with water measuring between 13 and 16 degrees Celsius (55–61 degrees Fahrenheit) according to the display on the BougeRV water heater. With the dial turned all the way to the left, the water pouring from the shower head rose to 23–25C (73–77F) after just a few seconds. Turned all the way to the right, the temperature maxed out at a steamy 34–41C (93–105F) in about 30 seconds.

    Recycling the water can make it even hotter, if you dare. After two or three cycles on max, the heater boosted the temperature above 51C (124F) before the unit shut down with an error, by design. It’s not meant to exceed an average water temperature above 50C (122F). A simple on/off cycle reset the E6 error.

    Water flow is between 2.2 and 3 liters per minute — well below the 9 to 12 L/min you can expect from a modern home shower. That’s still acceptable, in my opinion, and far superior to nothing, which is the typical alternative when camping away from home. The shower head has a rocker switch to toggle between hardish, mixed, and soft water flow rates, as well as an on/off limiter button to help conserve water between lathers.

    It’s surprisingly quiet even with the pump turned on. There’s some rapid clicking to ignite the gas (followed by a whoosh of flame) whenever the flow of water returns, and the pump produces a low-level hum that’s quickly drowned out by the sound of spraying water.

    The water heater is also protected from tilts, bumps, and an empty water source. When I leaned my review unit over about 30 degrees, it shut off. It also shut off automatically after two minutes of trying to pump from an empty bucket. A separate master on/off switch prevents the unit from turning on accidentally if the on/off button on the front is bumped during transport or storage.

    I’m impressed by BougeRV’s water heater, but I’m a little concerned about its durability over time. After using it on the beach on a windy day, I ran into trouble once I returned inside: the heater didn’t heat, and the water was reduced to a trickle out of the showerhead. It’s possible that some sediment trapped in the lines reduced the flow rate below the 1.2L/min required for ignition. Nevertheless, the issue was resolved after a few minutes of fiddling with the hoses and filters, and turning the unit on and off again. BougeRV offers a two-year warranty and says the water heater is rated at IPX4. So while it’s resistant to splashing water, there’s no assurance offered against dust and blowing sand.

    I do have a few other gripes. Those hoses can be a tripping and snagging hazard, and the plastic clip meant to hold the showerhead to one of the lifting handles is too weak to keep it from rotating and spraying your surroundings. I also wish BougeRV bundled the heater with an accessory bag to carry all the power adapters and hoses. And when putting the device away, you have to tip it forward to drain all the collected water from the inlet and outlet — there’s no automatic expulsion mechanism. But really, these are trivial issues for what the unit does at this price.

    A cold water option is great for cleaning gear.

    Prior to this review, I had been in the late planning stages of having a shower cabin, water pump, gas heater, extra-large water tank, and all necessary plumbing installed in my Sprinter van. Total cost: about $4,000. I’m now convinced that a portable system like what BougeRV offers is a better option. Why pay so much for something so permanent that’s only used a few minutes each week, for maybe half the year? Instead, BougeRV’s $310 portable water heater can function as an outdoor shower during the summer months or be moved inside (with ventilation) when coupled with a portable shower curtain and basin, all for less than $600. That sounds like a better use of my money, and probably yours if you’re an aspiring vanlifer.

    And when the van is parked, I can bring those hot (or cold) jets of water anywhere my adventures might take me: to clean up after mountain biking in the muddy forest or kitesurfing in the salty sea, to wash the dog outside after rolling in shit again, or to take a refreshing shower during a sweaty four-day music festival.

    A near-identical water heater is sold under the Ranien and Camplux brands, but those have larger 4000mAh (48Wh) batteries and list for between $349 and $399. So it might pay to shop around.

    Photos by Thomas Ricker / The Verge
    0 Comments ·0 Shares ·0 Reviews
  • Paris Agreement target won’t protect polar ice sheets, scientists warn

    not enough

    Paris Agreement target won’t protect polar ice sheets, scientists warn

    Calls for a more ambitious climate goal are rising as Earth hits several tipping points.

    Bob Berwyn, Inside Climate News



    May 21, 2025 11:35 am

    A slurry mix of sand and seawater is pumped via barge onto the main public beach during a sand replenishment project along eroding shoreline on November 21, 2024, in San Clemente, California.

    Credit: Mario Tama / Getty Images

    This article originally appeared on Inside Climate News, a nonprofit, nonpartisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.
    Sea levels in some parts of the world could be rising by as much as 8 to 12 inches per decade within the lifetime of today’s youngest generations, outpacing the ability of many coastal communities to adapt, scientists warned in a new study published this week.
    The research by an international team of sea level and polar ice experts suggests that limiting warming to 2.7° Fahrenheit (1.5° Celsius) above the pre-industrial temperature—the Paris Climate Agreement’s target—isn’t low enough to prevent a worst-case meltdown of Earth’s polar ice sheets.
    A better target for maintaining a safe climate, at least for the long term, might be closer to 1.8° Fahrenheit, said Durham University geographer and glacier expert Chris Stokes, a co-author of the new paper.
    “There have been a couple of quite high-profile papers recently, including a synthesis in Nature looking at safe planetary boundaries,” he said. “They made the argument that 1° Celsius is a better goal. And a couple of other papers have come out suggesting that we need a stricter temperature limit or a long-term goal. And I think the evidence is building towards that.”
    It’s not a new argument, he said, noting that climate research predating the first Intergovernmental Panel on Climate Change report in 1990 already highlighted the high risks of more than 1° C of warming.
    “Those studies were saying, ‘We’re warming. We really don’t want to go past 1°. We really don’t want to exceed 350 parts per million of carbon dioxide,’” he said. “Because we know what could happen looking at past warm periods and at simple calculations of ice sheet mass balance. And, you know, 30 years later, 40 years later, here we are seeing the problem.”
    Scientific calls for a more ambitious long-term climate goal are rising just as Earth’s average global temperature has breached the Paris Agreement target of 1.5° C of warming over the pre-industrial level nearly every consecutive month for the past two years. Atmospheric carbon dioxide has reached a concentration of 430 ppm, a 50 percent increase over pre-industrial levels.

    But missing those goals doesn’t diminish the importance of potentially revising the target, for which the Paris Agreement includes a review mechanism, Stokes said. Even if the global temperature overshoots the 1.5° mark, it’s important to know for the long term how much it would have to be lowered to return to a safe climate range.
    The new study focused on how melting polar ice masses drives sea level rise by combining evidence from past warm periods that were similar to the present, measurements of how much ice is being lost under the present level of warming, and projections of how much ice would be lost at different warming levels over the next few centuries.
    Sea level rise of several inches per decade would likely overwhelm adaptation efforts by many coastal communities in the US, said co-author Andrea Dutton, a geoscientist and sea level expert at the University of Wisconsin-Madison.
    “Coastal communities that are adapting to and preparing for future sea level rise are largely adapting to the amount of sea level rise that has already occurred,” she said. In a best-case scenario, she added, they are preparing for sea level rise at the current rate of a few millimeters per year, while the research suggests that rate will double within decades.
    The last time atmospheric carbon dioxide was at a concentration similar to now was in the mid-Pliocene warm period, just over 3 million years ago, when average global sea level rose 35 to 70 feet higher than today over the course of thousands of years.
    But the current rate of warming is far faster than any other time identified in the geological record. How the ice sheets will respond to warming at that speed is not clear, but nearly every new study in the past few decades has shown changes in the Arctic happening faster than expected.

    The United States’ ability to prepare for sea level rise is also profoundly threatened by the cuts to federal science agencies and staffing, Dutton said.
    The current cuts to science research, the retraction of funds already promised to communities through the Inflation Reduction Act of 2022, the abandonment of the congressionally mandated National Climate Assessment, and changes to federal rules on air pollution “collectively threaten our ability to project future sea level rise, to prepare our communities, and to mitigate climate change and stem the rate at which sea-level is rising,” she said via email.
    Many researchers are working closely with coastal communities, but as federal grants continue to get cut, these collaborations will founder, she added.
    “The ice sheets won’t care what different political parties ‘believe’ about climate change,” she said. “Like it or not, they are simply at the mercy of rising temperatures.”
    The mass of ice lost from the polar ice sheets has quadrupled since the 1990s, and they are currently losing around 370 billion metric tons of ice per year, said co-author Jonathan Bamber, a physicist at the University of Bristol who focuses on studying how Earth’s frozen regions interact with the rest of the climate system.
    “We switched on some new technology 30 years ago, and we discovered that the ice sheets are responding with a large amplitude and rather rapidly,” he said. The extent of the changes to the ice sheet are much greater than models had ever suggested they would be, he noted. “That was a bit of a shock for the whole community.”
    Most of the climate models of the past three decades projected only about half as much melting as has actually been observed during that time, he said. That suggests the “safe operating zone for humanity is about 350 ppm” of atmospheric carbon dioxide, corresponding to about 1° C of warming.

    “I think we’ve known for a long time that we’re interfering with the climate system in a very dangerous way,” he said. “And one of the points of our paper is to demonstrate that one part of the climate system, the ice sheets, are showing some very disturbing signals right now.”
    Some of the most vulnerable places are far from any melting ice sheets, including Belize City, home to about 65,000 people, where just 3 feet of sea level rise would swamp 500 square miles of land.
    In some low-lying tropical regions around the equator, sea level is rising three times as fast as the global average. That’s because the water is expanding as it warms, and as the ice sheets melt, their gravitational pull is reduced, allowing more water to flow away from the poles toward the equator.
    “At low latitudes, it goes up more than the average,” Bamber said. “It’s bad news for places like Bangladesh, India, Vietnam, and the Nile Delta.”
    Global policymakers need to be more aware of the effects of a 1.5° C temperature increase, Ambassador Carlos Fuller, long-time climate negotiator for Belize, said of the new study.
    Belize already moved its capital inland, but its largest city will be inundated at just 1 meter of sea-level rise, he said.
    “Findings such as these only sharpen the need to remain within the 1.5° Paris Agreement limit, or as close as possible, so we can return to lower temperatures and protect our coastal cities,” Fuller said.
    While the new study is focused on ice sheets, Durham University’s Stokes notes that recent research shows other parts of the Earth system are already at, or very near, tipping points that are irreversible on a timescale relevant to human civilizations. That includes changes to freshwater systems and ocean acidification.
    “I think somebody used the analogy that it’s like you’re wandering around in a dark room,” he said. “You know there’s a monster there, but you don’t know when you’re going to encounter it. It’s a little bit like that with these tipping points. We don’t know exactly where they are. We may have even crossed them, and we do know that we will hit them if we keep warming.”

    Bob Berwyn, Inside Climate News

    #paris #agreement #target #wont #protect
    Paris Agreement target won’t protect polar ice sheets, scientists warn
    arstechnica.com
    0 Comments ·0 Shares ·0 Reviews
  • Asus continues power supply arms race with new 3,000W PSU that handles up to four RTX 5090s

    In a nutshell: Since January, several manufacturers have released power supply units rated at over 2,000 watts – enough to run an air conditioner and far beyond what standard US outlets can handle. Now, Asus has raised the bar with a 3,000-watt model that supports up to four flagship Nvidia GPUs.
    Asus has unveiled the Pro WS 3,000W platinum power supply, the company's highest-capacity model yet. Packed with advanced features, the unit is the latest in a wave of high-end components built for increasingly power-hungry AI workstations.
    Thermaltake kicked off this year's PSU arms race in January with the D2000, a 2,000W unit aimed primarily at the European market. Super Flower raised the bar in March with a 2,800W model featuring four 12V-2x6 connectors and a $900 price tag. Then, in late April, SilverStone launched the 2,500W 2500Rz, capable of powering four RTX 5080s or three 5090s.

    Asus has joined the high-wattage fray with its 3,000W juggernaut, capable of powering an RTX 5090 on each of its four 12V-2x6 connectors. No one builds a rig like that for gaming – users build them to handle rendering or AI workloads. The "Asus Pro Workstation" label on the PSU's rear panel highlights the growing prominence of these use cases in the consumer GPU market.
    Like other recent high-power PSUs, the ATX 3.1-compliant Pro WS 3000 supports PCIe 5.1 connections. Each bundled cable delivers up to 600W. Its 80 Plus Platinum rating means it runs at 89 percent efficiency at full load and 92 percent efficiency at half load – one step below the Thermaltake model's titanium certification.

    Asus included dual-ball bearing fans for cooling, which the company says can last up to 80,000 hours – longer than fluid dynamic bearings and up to twice as long as sleeve bearings. However, the PSU has yet to receive a noise level certification, so the fan noise level remains unclear. Extended aluminum heatsinks improve heat dissipation, while gold-plated copper pins can reduce 12V-2x6 connector temperatures by up to 10 degrees Celsius.
    The Asus Pro WS 3000 packs its astounding power capacity within an impressively compact 175 x 150 x 86mm shell. Pricing on the company's global storefront is unclear, but the PSU isn't available on the company's US site because American sockets don't support such a high power draw.
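    A quick back-of-the-envelope check (my own arithmetic, not from the article): at the stated 89 percent full-load efficiency, delivering 3,000W to the components means drawing roughly 3,000 W / 0.89 ≈ 3,370 W from the wall — about 28 A at 120 V, far beyond the roughly 1,800 W that a standard 15 A US household circuit can supply.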
    #asus #continues #power #supply #arms
    Asus continues power supply arms race with new 3,000W PSU that handles up to four RTX 5090s
    www.techspot.com
    0 Comments ·0 Shares ·0 Reviews
  • Optimizing Multi-Objective Problems with Desirability Functions

    When working in data science, it is not uncommon to encounter problems with competing objectives. Whether designing products, tuning algorithms, or optimizing portfolios, we often need to balance several metrics to get the best possible outcome. Sometimes, maximizing one metric comes at the expense of another, making it hard to reach a solution that is optimal overall.

    While several methods exist for solving multi-objective optimization problems, I find desirability functions to be both elegant and easy to explain to a non-technical audience, which makes them an interesting option to consider. Desirability functions combine several metrics into a standardized score, allowing for a holistic optimization.

    In this article, we’ll explore:

    The mathematical foundation of desirability functions

    How to implement these functions in Python

    How to optimize a multi-objective problem with desirability functions

    Visualization for interpretation and explanation of the results

    To ground these concepts in a real example, we’ll apply desirability functions to bread baking: a toy problem with a few interconnected parameters and competing quality objectives that will let us explore several optimization choices.

    By the end of this article, you’ll have a powerful new tool in your data science toolkit for tackling multi-objective optimization problems across numerous domains, as well as a fully functional code available here on GitHub.
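    The snippets that follow rely on NumPy and SciPy; a minimal import block (the repository may organize these differently) looks like this:

    import numpy as np
    from scipy.optimize import minimize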

    What are Desirability Functions?

    Desirability functions were first formalized by Harrington (1965) and later extended by Derringer and Suich (1980). The idea is to:

    Transform each response into a performance score between 0 and 1

    Combine all scores into a single metric to maximize

    Let’s explore the types of desirability functions and then how we can combine all the scores.

    The different types of desirability functions

    There are three types of desirability functions, which together can handle most situations.

    Smaller-is-better: Used when minimizing a response is desirable

    def desirability_smaller_is_better(x: float, x_min: float, x_max: float) -> float:
        """Calculate desirability function value where smaller values are better.

        Args:
            x: Input parameter value
            x_min: Minimum acceptable value
            x_max: Maximum acceptable value

        Returns:
            Desirability score between 0 and 1
        """
        if x <= x_min:
            return 1.0
        elif x >= x_max:
            return 0.0
        else:
            # Linear interpolation between the two bounds
            return (x_max - x) / (x_max - x_min)

    Larger-is-better: Used when maximizing a response is desirable

    def desirability_larger_is_better(x: float, x_min: float, x_max: float) -> float:
        """Calculate desirability function value where larger values are better.

        Args:
            x: Input parameter value
            x_min: Minimum acceptable value
            x_max: Maximum acceptable value

        Returns:
            Desirability score between 0 and 1
        """
        if x <= x_min:
            return 0.0
        elif x >= x_max:
            return 1.0
        else:
            # Linear interpolation between the two bounds
            return (x - x_min) / (x_max - x_min)

    Target-is-best: Used when a specific target value is optimal

    def desirability_target_is_best(x: float, x_min: float, x_target: float, x_max: float) -> float:
        """Calculate two-sided desirability function value with target value.

        Args:
            x: Input parameter value
            x_min: Minimum acceptable value
            x_target: Target (optimal) value
            x_max: Maximum acceptable value

        Returns:
            Desirability score between 0 and 1
        """
        if x_min <= x <= x_target:
            return (x - x_min) / (x_target - x_min)
        elif x_target < x <= x_max:
            return (x_max - x) / (x_max - x_target)
        else:
            return 0.0

    Every input parameter can be parameterized with one of these three desirability functions, before combining them into a single desirability score.
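    As a quick sanity check of the three helpers (the bounds and targets below are arbitrary example values, not ones from the bread problem):

    print(desirability_smaller_is_better(5, x_min=2, x_max=10))            # 0.625
    print(desirability_larger_is_better(150, x_min=30, x_max=180))          # 0.8
    print(desirability_target_is_best(26, x_min=20, x_target=24, x_max=28)) # 0.5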

    Combining Desirability Scores

    Once individual metrics are transformed into desirability scores, they need to be combined into an overall desirability. The most common approach is the weighted geometric mean:

    D = (d1^w1 × d2^w2 × … × dn^wn)^(1 / (w1 + w2 + … + wn))

    where the di are individual desirability values and the wi are weights reflecting the relative importance of each metric.

    The geometric mean has an important property: if any single desirability is 0, the overall desirability is also 0, regardless of other values. This enforces that all requirements must be met to some extent.

    def overall_desirability(desirabilities, weights=None):
        """Compute overall desirability using geometric mean

        Parameters:
        -----------
        desirabilities : list
            Individual desirability scores
        weights : list
            Weights for each desirability

        Returns:
        --------
        float
            Overall desirability score
        """
        if weights is None:
            weights = [1] * len(desirabilities)

        # Convert to numpy arrays
        d = np.array(desirabilities)
        w = np.array(weights)

        # Calculate weighted geometric mean
        return np.prod(d ** w) ** (1 / np.sum(w))
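    A quick illustration of the veto property (the numbers are made up):

    print(overall_desirability([0.9, 0.8, 0.0]))  # -> 0.0, one failed requirement vetoes everything
    print(overall_desirability([0.9, 0.8, 0.7]))  # -> ~0.80, all requirements partly satisfied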

    The weights are hyperparameters that give you leverage over the final outcome and leave room for customization.

    A Practical Optimization Example: Bread Baking

    To demonstrate desirability functions in action, let’s apply them to a toy problem: optimizing a bread-baking process.

    The Parameters and Quality Metrics

    Let’s play with the following parameters:

    Fermentation Time

    Fermentation Temperature

    Hydration Level

    Kneading Time

    Baking Temperature

    And let’s try to optimize these metrics:

    Texture Quality: The texture of the bread

    Flavor Profile: The flavor of the bread

    Practicality: The practicality of the whole process

    Of course, each of these metrics depends on more than one parameter. So here comes one of the most critical steps: mapping parameters to quality metrics. 

    For each quality metric, we need to define how parameters influence it:

    from typing import List

    def compute_flavor_profile(params: List[float]) -> float:
        """Compute flavor profile score based on input parameters.

        Args:
            params: List of parameter values [fermentation_time, ferment_temp, hydration, kneading_time, baking_temp]

        Returns:
            Weighted flavor profile score between 0 and 1
        """
        # Flavor mainly affected by fermentation parameters
        fermentation_d = desirability_larger_is_better(params[0], 30, 180)
        ferment_temp_d = desirability_target_is_best(params[1], 20, 24, 28)
        hydration_d = desirability_target_is_best(params[2], 65, 75, 85)

        # Baking temperature has minimal effect on flavor
        weights = [0.5, 0.3, 0.2]
        return np.average([fermentation_d, ferment_temp_d, hydration_d], weights=weights)

    Here for example, the flavor is influenced by the following:

    The fermentation time, with a minimum desirability below 30 minutes and a maximum desirability above 180 minutes

    The fermentation temperature, with a maximum desirability peaking at 24 degrees Celsius

    The hydration, with a maximum desirability peaking at 75% humidity

    These individual desirabilities are then combined in a weighted average to return the flavor desirability. Similar computations are made for the texture quality and practicality, as sketched below.
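
    The article does not list the texture and practicality functions in full, so the following is only a plausible sketch built from the same desirability primitives; the function names appear in the objective function below, but the thresholds and weights are illustrative assumptions.

    def compute_texture_quality(params: List[float]) -> float:
        """Sketch: texture driven mainly by hydration, kneading time and baking temperature.
        Thresholds and weights are assumed for illustration, not the article's values."""
        hydration_d = desirability_target_is_best(params[2], 60, 72, 85)
        kneading_d = desirability_larger_is_better(params[3], 0, 15)
        baking_d = desirability_target_is_best(params[4], 180, 230, 250)
        return np.average([hydration_d, kneading_d, baking_d], weights=[0.4, 0.3, 0.3])

    def compute_practicality(params: List[float]) -> float:
        """Sketch: practicality favors short fermentation and kneading times.
        Thresholds and weights are assumed for illustration, not the article's values."""
        fermentation_d = desirability_smaller_is_better(params[0], 30, 180)
        kneading_d = desirability_smaller_is_better(params[3], 5, 20)
        return np.average([fermentation_d, kneading_d], weights=[0.5, 0.5])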

    The Objective Function

    Following the desirability function approach, we’ll use the overall desirability as our objective function. The goal is to maximize this overall score, which means finding parameters that best satisfy all our three requirements simultaneously:

    def objective_function(params: List[float], weights: List[float]) -> float:
        """Compute overall desirability score based on individual quality metrics.

        Args:
            params: List of parameter values
            weights: Weights for texture, flavor and practicality scores

        Returns:
            Negative overall desirability score (for minimization)
        """
        # Compute individual desirability scores
        texture = compute_texture_quality(params)
        flavor = compute_flavor_profile(params)
        practicality = compute_practicality(params)

        # Ensure weights sum up to one
        weights = np.array(weights) / np.sum(weights)

        # Calculate overall desirability using geometric mean
        overall_d = overall_desirability([texture, flavor, practicality], weights)

        # Return negative value since we want to maximize desirability
        # but optimization functions typically minimize
        return -overall_d

    After computing the individual desirabilities for texture, flavor and practicality, the overall desirability is computed with a weighted geometric mean. The function finally returns the negative overall desirability, so that it can be minimized.

    Optimization with SciPy

    We finally use SciPy’s minimize function to find optimal parameters. Since we returned the negative overall desirability as the objective function, minimizing it would maximize the overall desirability:

    from scipy.optimize import minimize

    def optimize(weights: list[float]) -> list[float]:
        # Define parameter bounds
        bounds = {
            'fermentation_time': (1, 24),
            'fermentation_temp': (20, 30),
            'hydration_level': (60, 85),
            'kneading_time': (0, 20),
            'baking_temp': (180, 250)
        }

        # Initial guess (middle of bounds)
        x0 = [(b[0] + b[1]) / 2 for b in bounds.values()]

        # Run optimization
        result = minimize(
            objective_function,
            x0,
            args=(weights,),
            bounds=list(bounds.values()),
            method='SLSQP'
        )

        return result.x

    In this function, after defining the bounds for each parameter, the initial guess is computed as the midpoint of each parameter's bounds and then passed to SciPy's minimize function. The optimized parameters are finally returned.

    The weights are given as input to the optimizer too, and are a good way to customize the output. For example, with a larger weight on practicality, the optimized solution will focus on practicality over flavor and texture.
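
    As a usage sketch, the weight vectors below are illustrative assumptions, ordered as texture, flavor, practicality (matching the objective function's docstring):

    # Hypothetical weight vectors: [texture, flavor, practicality]
    practical_params = optimize(weights=[0.2, 0.2, 0.6])  # favor practicality
    texture_params = optimize(weights=[0.6, 0.2, 0.2])    # favor texture
    print(practical_params)
    print(texture_params)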

    Let’s now visualize the results for a few sets of weights.

    Visualization of Results

    Let’s see how the optimizer handles different preference profiles, demonstrating the flexibility of desirability functions, given various input weights.
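
    The result charts in the original post are images; as a minimal plotting sketch (the weight vectors and chart styling are my own assumptions), the two optimized parameter sets could be compared like this:

    import matplotlib.pyplot as plt

    labels = ['fermentation_time', 'fermentation_temp', 'hydration_level', 'kneading_time', 'baking_temp']
    profiles = {
        'favor practicality': optimize(weights=[0.2, 0.2, 0.6]),
        'favor texture': optimize(weights=[0.6, 0.2, 0.2]),
    }

    # Grouped bar chart of the optimized parameter values for each weight profile
    x = np.arange(len(labels))
    fig, ax = plt.subplots()
    for i, (name, params) in enumerate(profiles.items()):
        ax.bar(x + 0.4 * i, params, width=0.4, label=name)
    ax.set_xticks(x + 0.2)
    ax.set_xticklabels(labels, rotation=30, ha='right')
    ax.set_ylabel('optimized value')
    ax.legend()
    plt.tight_layout()
    plt.show()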

    Let’s have a look at the results in case of weights favoring practicality:

    Optimized parameters with weights favoring practicality. Image by author.

    With weights largely in favor of practicality, the achieved overall desirability is 0.69, with a short kneading time of 5 minutes, since a high value negatively impacts practicality.

    Now, if we optimize with an emphasis on texture, we have slightly different results:

    Optimized parameters with weights favoring texture. Image by author.

    In this case, the achieved overall desirability is 0.85, significantly higher. The kneading time is now 12 minutes, as a higher value positively impacts the texture and is penalized less by the lower practicality weight.

    Conclusion: Practical Applications of Desirability Functions

    While we focused on bread baking as our example, the same approach can be applied to various domains, such as product formulation in cosmetics or resource allocation in portfolio optimization.

    Desirability functions provide a powerful mathematical framework for tackling multi-objective optimization problems across numerous data science applications. By transforming raw metrics into standardized desirability scores, we can effectively combine and optimize disparate objectives.

    The key advantages of this approach include:

    Standardized scales that make different metrics comparable and easy to combine into a single target

    Flexibility to handle different types of objectives: minimize, maximize, target

    Clear communication of preferences through mathematical functions

    The code presented here provides a starting point for your own experimentation. Whether you’re optimizing industrial processes, machine learning models, or product formulations, hopefully desirability functions offer a systematic approach to finding the best compromise among competing objectives.
    The post Optimizing Multi-Objective Problems with Desirability Functions appeared first on Towards Data Science.
  • How Dell’s AI Infrastructure Updates Deliver Choice, Control And Scale

    Dell Technologies World focuses its keynote on Inventing the Future with AI. (Image: Dell Technologies)
    Dell Technologies unveiled a significant expansion of its Dell AI Factory platform at its annual Dell Technologies World conference today, announcing over 40 product enhancements designed to help enterprises deploy artificial intelligence workloads more efficiently across both on-premises environments and cloud systems.

    The Dell AI Factory is not a physical manufacturing facility but a comprehensive framework combining advanced infrastructure, validated solutions, services, and an open ecosystem to help businesses harness the full potential of artificial intelligence across diverse environments—from data centers and cloud to edge locations and AI PCs.

    The company has attracted over 3,000 AI Factory customers since launching the platform last year. In an earlier call with industry analysts, Dell shared research stating that 79% of production AI workloads are running outside of public cloud environments—a trend driven by cost, security, and data governance concerns. During the keynote, Michael Dell provided more color on the value of Dell’s AI factory concept. He said, "The Dell AI factory is up to 60% more cost effective than the public cloud, and recent studies indicate that about three-fourths of AI initiatives are meeting or exceeding expectations. That means organizations are driving ROI and productivity gains from 20% to 40% in some cases."

    Making AI Easier to Deploy
    Organizations need the freedom to run AI workloads wherever it makes the most sense for their business, without sacrificing performance or control. While IT leaders embraced the public cloud for their initial AI services, many organizations are now looking for a more nuanced approach where the company retains control over its most critical AI assets while maintaining the flexibility to use cloud resources when appropriate. Over 80 percent of the companies Lopez Research interviewed said they struggled to find the budget and technical talent to deploy AI. These AI deployment challenges have only increased as more AI models and AI infrastructure services were launched.
    Silicon Diversity and Customer Choice
    A central theme of Dell's AI Factory message is how Dell makes AI easier to deploy while delivering choice. Dell is offering customers choice through silicon diversity in its designs, but also with ISV models. The company announced it has added Intel to its AI Factory portfolio with Intel Gaudi 3 AI accelerators and Intel Xeon processors, with a strong focus on inferencing workloads.
    Dell also announced its fourth update to the Dell AI Platform with AMD, rolling out two new PowerEdge servers—the XE9785 and the XE9785L—equipped with the latest AMD Instinct MI350 series GPUs. The Dell AI Factory with NVIDIA combines Dell's infrastructure with NVIDIA's AI software and GPU technologies to deliver end-to-end solutions that can reduce setup time by up to 86% compared to traditional approaches. The company also continues to strengthen its partnership with NVIDIA, announcing products that leverage NVIDIA's Blackwell family and other updates launched at NVIDIA GTC. As of today, Dell supports choice by delivering AI solutions with all of the primary GPU and AI accelerator infrastructure providers.
    Client-Side AI Advancements
    At the edge of the AI Factory ecosystem, Dell announced enhancements to the Dell Pro Max in a mobile form factor, leveraging Qualcomm’s AI 100 discrete NPUs designed for AI engineers and data scientists who need fast inferencing capabilities. With up to 288 TOPs at 16-bit floating point precision, these devices can power up to a 70-billion parameter model, delivering 7x the inferencing speed and 4x the accuracy over a 40 TOPs NPU. According to Dell, the Pro Max Plus line can run a 109-billion parameter AI model.
    The Dell Pro Max Plus targets AI engineers and data scientists with the ability to run a 109-billion parameter model. (Image: Dell Technologies)
    The Pro Max and Plus launches follow Dell's previous announcement of AI PCs featuring Dell Pro Max with GB 10 and GB 300 processors powered by NVIDIA's Grace Blackwell architecture. Overall, Dell has simplified its PC portfolio but made it easier for customers to choose the right system for their workloads by providing the latest chips from AMD, Intel, Nvidia, and Qualcomm.
    On-Premise AI Deployment Gains Ecosystem Momentum
    Following the theme of choice, organizations need the flexibility to run AI workloads on-premises and in the cloud. Dell is making significant strides in enabling on-premise AI deployments with major software partners. The company announced it is the first provider to bring Cohere capabilities on-premises, combining Cohere's generative AI models with Dell's secure, scalable infrastructure for turnkey enterprise solutions.
    Similar partnerships with Mistral and Glean were also announced, with Dell facilitating their first on-premise deployments. Additionally, Dell is supporting Google's Gemini on-premises with Google Distributed Cloud.
    To simplify model deployment, Dell now offers customers the ability to choose models on Hugging Face and deploy them in an automated fashion using containers and scripts. Enterprises increasingly recognize that while public cloud AI has its place, a hybrid AI infrastructure approach could deliver better economics and security for production workloads.
    The imperative for scalable yet efficient AI infrastructure at the edge is a growing need. As Michael Dell said during his Dell Technologies World keynote, “Over 75% of enterprise data will soon be created and processed at the edge, and AI will follow that data; it's not the other way around. The future of AI will be decentralized, low latency, and hyper-efficient.”
    Dell's ability to offer robust hybrid and fully on-premises solutions for AI is proving to be a significant advantage as companies increasingly seek on-premises support and even potentially air-gapped solutions for their most sensitive AI workloads. Key industries adopting the Dell AI Factory include finance, retail, energy, and healthcare providers.
    Scaling AI Requires a Focus on Energy Efficiency
    Simplifying AI also requires product innovations that deliver cost-effective, energy-efficient technology. As AI workloads drive unprecedented power consumption, Dell has prioritized energy efficiency in its latest offerings. The company introduced the Dell PowerCool Enclosed Rear Door Heat Exchanger (eRDHx) with the Dell Integrated Rack Controller (IRC). This new cooling solution captures nearly 100% of the heat coming from GPU-intensive workloads. This innovation lowers cooling energy requirements for a rack by 60%, allowing customers to deploy 16% more racks with the same power infrastructure.
    Dell's new systems are rated to operate at 32 to 37 degrees Celsius, supporting significantly warmer temperatures than traditional air-cooled or water-chilled systems, further reducing power consumption for cooling. The PowerEdge XE9785L now offers Dell liquid cooling for flexible power management. Even if a company isn't aiming for a specific sustainability goal, every organization wants to improve energy utilization.
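    As a rough back-of-the-envelope check, and assuming cooling overhead is about 30% of a rack's IT load (an assumption for illustration, not a Dell figure), a 60% cut in cooling energy works out to roughly 16% more racks on the same power budget:

    # Back-of-the-envelope sketch; the 30% cooling-overhead baseline is an assumed value
    it_power = 1.0                                # normalized IT load per rack
    cooling_before = 0.30 * it_power              # assumed cooling overhead per rack
    cooling_after = cooling_before * (1 - 0.60)   # 60% reduction claimed by Dell
    rack_before = it_power + cooling_before       # 1.30 units per rack
    rack_after = it_power + cooling_after         # 1.12 units per rack
    extra_racks = rack_before / rack_after - 1
    print(f"~{extra_racks:.0%} more racks on the same power budget")  # ~16%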
    Early Adopter Use Cases Highlight AI's Opportunity
    With over 200 product enhancements to its AI Factory in just one year, Dell Technologies is positioning itself as a central player in the rapidly evolving enterprise AI infrastructure market. It offers the breadth of solutions and expertise organizations require to successfully implement production-grade AI systems in a secure and scalable fashion. However, none of this technology matters if enterprises can't find a way to create business value by adopting it. Fortunately, examples from the first wave of enterprise early adopters highlight ways AI can deliver meaningful returns in productivity and customer experience. Let's look at two use cases presented at Dell Tech World.
    Michael Dell, CEO of Dell Technologies, interviews Larry Feinsmith, the Managing Director and Head of Global Tech Strategy, Innovation & Partnerships at JPMorgan Chase, at Dell Technologies World. (Image: Dell Technologies)
    The Power of LLMs in Finance at JPMorgan Chase
    JPMorgan Chase took the stage to make AI real from a customer’s perspective. The financial firm uses Dell's compute hardware, software-defined storage, client, and peripheral solutions. Larry Feinsmith, the Managing Director and Head of Global Tech Strategy, Innovation & Partnerships at JPMorgan Chase, said, "We have a hybrid, multi-cloud, multi-provider strategy. Our private cloud is an incredibly strategic asset for us. We still have many applications and data on-premises for resiliency, latency, and a variety of other benefits."
    Feinsmith also spoke of the company's Large Language Model (LLM) strategy. He said, “Our strategy is to use a constellation of models, both foundational and open, which requires a tremendous amount of compute in our data centers, in the public cloud, and, of course, at the edge. The one constant thing, whether you're training models, fine-tuning models, or finding a great use case that has large-scale inferencing, is that they all will drive compute. We think Dell is incredibly well positioned to help JPMorgan Chase and other companies in their AI journey.”
    Feinsmith noted that using AI isn't new for JPMorgan Chase. For over a decade, JPMorgan Chase has leveraged various types of AI, such as machine learning models for fraud detection, personalization, and marketing operations. The company uses what Feinsmith called its LLM suite, which over 200,000 people at JPMorgan Chase use today. The generative AI application is used for QA summarization and content generation using JPMorgan Chase's data in a highly secure way. Next, it has used the LLM suite architecture to build applications for its financial advisors, contact center agents, and any employee interacting with its clients. Its third use case highlighted changes in the software development area. JPMorgan Chase rolled out code generation AI capabilities to over 40,000 engineers. It has achieved as much as a 20% productivity gain in code creation and expects to leverage AI throughout the software development life cycle. Going forward, the financial firm expects to use AI agents and reasoning models to execute complex business processes end-to-end.
    Seemantini Godbole, EVP and Chief Digital and Information Officer at Lowe's, shares the retailer's AI strategy at Dell Technologies World. (Image: Dell Technologies)
    How AI Makes It Easier For Employees to Serve Customers at Lowe's
    Lowe's Home Improvement Stores provided another example of how companies are leveraging Dell Technology and AI to transform the customer and employee experience. Seemantini Godbole, EVP and Chief Digital and Information Officer at Lowe's, shared insights on designing the strategy for AI when she said, "How should we deploy AI? We wanted to do impactful and meaningful things. We did not want to die a death of 1000 pilots, and we organized our efforts across how we sell, how we shop, and how we work. How we sell was for our associates. How we shop is for our customers, and how we work is for our headquarters employees. For whatever reason, most companies have begun with their workforce in the headquarters. We said, No, we are going to put AI in the hands of 300,000 associates."
    For example, she described a generative AI companion app for store associates. "Every store associate now has on his or her Zebra device a ChatGPT-like experience for home improvement," said Godbole. Lowe's is also deploying computer vision algorithms at the edge to understand issues such as whether a customer in a particular aisle is waiting for help. The system will then send notifications to the associates in that department. Customers can also ask various home improvement questions, such as what paint finish to use in a bathroom, at Lowes.com/AI.
    Designing A World Where AI Infrastructure Delivers Human Opportunity
    Michael Dell said, "We are entering the age of ubiquitous intelligence, where AI becomes as essential as electricity, with AI, you can distill years of experience into instant insights, speeding up decisions and uncovering patterns in massive data. But it’s not here to replace humans. AI is a collaborator that frees your teams to do what they do best, to innovate, to imagine, and to solve the world's toughest problems."
    While there are many AI deployment challenges ahead, the customer examples shared at Dell Technologies World provide a glimpse into a world where AI infrastructure and services benefit both customers and employees. The challenge now is to do this sustainably and ethically at scale.
    #how #dells #infrastructure #updates #deliver
    How Dell’s AI Infrastructure Updates Deliver Choice, Control And Scale
    Dell Technologies World focuses its keynote on Inventing the Future with AIDell Technologies Dell Technologies unveiled a significant expansion of its Dell AI Factory platform at its annual Dell Technologies World conference today, announcing over 40 product enhancements designed to help enterprises deploy artificial intelligence workloads more efficiently across both on-premises environments and cloud systems. The Dell AI Factory is not a physical manufacturing facility but a comprehensive framework combining advanced infrastructure, validated solutions, services, and an open ecosystem to help businesses harness the full potential of artificial intelligence across diverse environments—from data centers and cloud to edge locations and AI PCs. The company has attracted over 3,000 AI Factory customers since launching the platform last year. In an earlier call with industry analysts, Dell shared research stating that 79% of production AI workloads are running outside of public cloud environments—a trend driven by cost, security, and data governance concerns. During the keynote, Michael Dell provided more color on the value of Dell’s AI factory concept. He said, "The Dell AI factory is up to 60% more cost effective than the public cloud, and recent studies indicate that about three-fourths of AI initiatives are meeting or exceeding expectations. That means organizations are driving ROI and productivity gains from 20% to 40% in some cases. Making AI Easier to Deploy Organizations need the freedom to run AI workloads wherever makes the most sense for their business, without sacrificing performance or control. While IT leaders embraced the public cloud for their initial AI services, many organizations are now looking for a more nuanced approach where the company can control over their most critical AI assets while maintaining the flexibility to use cloud resources when appropriate. Over 80 percent of the companies Lopez Research interviewed said they struggled to find the budget and technical talent to deploy AI. These AI deployment challenges have only increased as more AI models and AI infrastructure services were launched. Silicon Diversity and Customer Choice A central theme of Dell's AI Factory message is how Dell makes AI easier to deploy while delivering choice. Dell is offering customers choice through silicon diversity in its designs, but also with ISV models. The company announced it has added Intel to its AI Factory portfolio with Intel Gaudi 3 AI accelerators and Intel Xeon processors, with a strong focus on inferencing workloads. Dell also announced its fourth update to the Dell AI Platform with AMD, rolling out two new PowerEdge servers—the XE9785 and the XE9785L—equipped with the latest AMD Instinct MI350 series GPUs. The Dell AI Factory with NVIDIA combines Dell's infrastructure with NVIDIA's AI software and GPU technologies to deliver end-to-end solutions that can reduce setup time by up to 86% compared to traditional approaches. The company also continues to strengthen its partnership with NVIDIA, announcing products that leverage NVIDIA's Blackwell family and other updates launched at NVIDIA GTC. As of today, Dell supports choice by delivering AI solutions with all of the primary GPU and AI accelerator infrastructure providers. 
Client-Side AI Advancements At the edge of the AI Factory ecosystem, Dell announced enhancements to the Dell Pro Max in a mobile form factor, leveraging Qualcomm’s AI 100 discrete NPUs designed for AI engineers and data scientists who need fast inferencing capabilities. With up to 288 TOPs at 16-bit floating point precision, these devices can power up to a 70-billion parameter model, delivering 7x the inferencing speed and 4x the accuracy over a 40 TOPs NPU. According to Dell, the Pro Max Plus line can run a 109-billion parameter AI model. The Dell Pro Max Plus targets AI engineers and data scientist with with the ability to run a ... More 109-billion parameter modelDell Technologies The Pro Max and Plus launches follow Dell's previous announcement of AI PCs featuring Dell Pro Max with GB 10 and GB 300 processors powered by NVIDIA's Grace Blackwell architecture. Overall, Dell has simplified its PC portfolio but made it easier for customers to choose the right system for their workloads by providing the latest chips from AMD, Intel, Nvidia, and Qualcomm. On-Premise AI Deployment Gains Ecosystem Momentum Following the theme of choice, organizations need the flexibility to run AI workloads on-premises and in the cloud. Dell is making significant strides in enabling on-premise AI deployments with major software partners. The company announced it is the first provider to bring Cohere capabilities on-premises, combining Cohere's generative AI models with Dell's secure, scalable infrastructure for turnkey enterprise solutions. Similar partnerships with Mistral and Glean were also announced, with Dell facilitating their first on-premise deployments. Additionally, Dell is supporting Google's Gemini on-premises with Google Distributed Cloud. To simplify model deployment, Dell now offers customers the ability to choose models on Hugging Face and deploy them in an automated fashion using containers and scripts. Enterprises increasingly recognize that while public cloud AI has its place, a hybrid AI infrastructure approach could deliver better economics and security for production workloads. The imperative for scalable yet efficient AI infrastructure at the edge is a growing need. As Michael Dell said during his Dell Technologies World keynote, “Over 75% of enterprise data will soon be created and processed at the edge, and AI will follow that data; it's not the other way around. The future of AI will be decentralized, low latency, and hyper-efficient.” Dell's ability to offer robust hybrid and fully on-premises solutions for AI is proving to be a significant advantage as companies increasingly seek on-premises support and even potentially air-gapped solutions for their most sensitive AI workloads. Key industries adopting the Dell AI Factory include finance, retail, energy, and healthcare providers. Scaling AI Requires a Focus on Energy Efficiency Simplifying AI also requires product innovations that deliver cost-effective, energy-efficient technology. As AI workloads drive unprecedented power consumption, Dell has prioritized energy efficiency in its latest offerings. The company introduced the Dell PowerCool Enclosed Rear Door Heat Exchangerwith Dell Integrated Rack Controller. This new cooling solution captures nearly 100% of the heat coming from GPU-intensive workloads. This innovation lowers cooling energy requirements for a rack by 60%, allowing customers to deploy 16% more racks with the same power infrastructure. 
Dell's new systems are rated to operate at 32 to 37 degrees Celsius, supporting significantly warmer temperatures than traditional air-cooled or water-chilled systems, further reducing power consumption for cooling. The PowerEdge XE9785L now offers Dell liquid cooling for flexible power management. Even if a company isn't aiming for a specific sustainability goal, every organization wants to improve energy utilization. Early Adopter Use Cases Highlight AI's Opportunity With over 200 product enhancements to its AI Factory in just one year, Dell Technologies is positioning itself as a central player in the rapidly evolving enterprise AI infrastructure market. It offers the breadth of solutions and expertise organizations require to successfully implement production-grade AI systems in a secure and scalable fashion. However, none of this technology matters if enterprises can't find a way to create business value by adopting it. Fortunately, examples from the first wave of enterprise early adopters highlight ways AI can deliver meaningful returns in productivity and customer experience. Let's look at two use cases presented at Dell Tech World.Michael Dell, CEO of Dell Technologies, interviews Larry Feinsmith, the Managing Director and Head ... More of Global Tech Strategy, Innovation & Partnerships at JPMorgan Chase at Dell Technologies WorldDell Technologies The Power of LLMs in Finance at JPMorgan Chase JPMorgan Chase took the stage to make AI real from a customer’s perspective. The financial firm uses Dell's compute hardware, software-defined storage, client, and peripheral solutions. Larry Feinsmith, the Managing Director and Head of Global Tech Strategy, Innovation & Partnerships at JPMorgan Chase, said, "We have a hybrid, multi-cloud, multi-provider strategy. Our private cloud is an incredibly strategic asset for us. We still have many applications and data on-premises for resiliency, latency, and a variety of other benefits." Feinsmith also spoke of the company's Large Language Modelstrategy. He said, “Our strategy is to use a constellation of models, both foundational and open, which requires a tremendous amount of compute in our data centers, in the public cloud, and, of course, at the edge. The one constant thing, whether you're training models, fine-tuning models, or finding a great use case that has large-scale inferencing, is that they all will drive compute. We think Dell is incredibly well positioned to help JPMorgan Chase and other companies in their AI journey.” Feinsmith noted that using AI isn't new for JPMorgan Chase. For over a decade, JPMorgan Chase has leveraged various types of AI, such as machine learning models for fraud detection, personalization, and marketing operations. The company uses what Feinsmith called its LLM suite, which over 200,000 people at JPMorgan Chase use today. The generative AI application is used for QA summarization and content generation using JPMorgan Chase's data in a highly secure way. Next, it has used the LLM suite architecture to build applications for its financial advisors, contact center agents, and any employee interacting with its clients. Its third use case highlighted changes in the software development area. JPMorgan Chase rolled out code generation AI capabilities to over 40,000 engineers. It has achieved as much as 20% productivity in the code creation and expects to leverage AI throughout the software development life cycle. 
Going forward, the financial firm expects to use AI agents and reasoning models to execute complex business processes end-to-end.Seemantini Godbole, EVP and Chief Digital and Information Officer at Lowe's shares the retailers AI ... More strategy at Dell Technologies WorldDell Technologies. How AI Makes It Easier For Employees to Serve Customers at Lowe's Lowe's Home Improvement Stores provided another example of how companies are leveraging Dell Technology and AI to transform the customer and employee experience. Seemantini Godbole, EVP and Chief Digital and Information Officer at Lowe's, shared insights on designing the strategy for AI when she said, "How should we deploy AI? We wanted to do impactful and meaningful things. We did not want to die a death of 1000 pilots, and we organized our efforts across how we sell, how we shop, and how we work. How we sell was for our associates. How we shop is for our customers, and how we work is for our headquarters employees. For whatever reason, most companies have begun with their workforce in the headquarters. We said, No, we are going to put AI in the hands of 300,000 associates." For example, she described a generative AI companion app for store associates. "Every store associate now has on his or her zebra device a ChatGPT-like experience for home improvement.", said Godbole. Lowe's is also deploying computer vision algorithms at the edge to understand issues such as whether a customer in a particular aisle is waiting for help. The system will then send notifications to the associates in that department. Customers can also ask various home improvement questions, such as what paint finish to use in a bathroom, at Lowes.com/AI. Designing A World Where AI Infrastructure Delivers Human Opportunity Michael Dell said, "We are entering the age of ubiquitous intelligence, where AI becomes as essential as electricity, with AI, you can distill years of experience into instant insights, speeding up decisions and uncovering patterns in massive data. But it’s not here to replace humans. AI is a collaborator that frees your teams to do what they do best, to innovate, to imagine, and to solve the world's toughest problems." While there are many AI deployment challenges ahead, the customer examples shared at Dell Technologies World provide a glimpse into a world where AI infrastructure and services benefits both customers and employees. The challenge now is to do this sustainably and ethically at scale. #how #dells #infrastructure #updates #deliver
    How Dell’s AI Infrastructure Updates Deliver Choice, Control And Scale
    www.forbes.com
    Dell Technologies World focuses its keynote on Inventing the Future with AIDell Technologies Dell Technologies unveiled a significant expansion of its Dell AI Factory platform at its annual Dell Technologies World conference today, announcing over 40 product enhancements designed to help enterprises deploy artificial intelligence workloads more efficiently across both on-premises environments and cloud systems. The Dell AI Factory is not a physical manufacturing facility but a comprehensive framework combining advanced infrastructure, validated solutions, services, and an open ecosystem to help businesses harness the full potential of artificial intelligence across diverse environments—from data centers and cloud to edge locations and AI PCs. The company has attracted over 3,000 AI Factory customers since launching the platform last year. In an earlier call with industry analysts, Dell shared research stating that 79% of production AI workloads are running outside of public cloud environments—a trend driven by cost, security, and data governance concerns. During the keynote, Michael Dell provided more color on the value of Dell’s AI factory concept. He said, "The Dell AI factory is up to 60% more cost effective than the public cloud, and recent studies indicate that about three-fourths of AI initiatives are meeting or exceeding expectations. That means organizations are driving ROI and productivity gains from 20% to 40% in some cases. Making AI Easier to Deploy Organizations need the freedom to run AI workloads wherever makes the most sense for their business, without sacrificing performance or control. While IT leaders embraced the public cloud for their initial AI services, many organizations are now looking for a more nuanced approach where the company can control over their most critical AI assets while maintaining the flexibility to use cloud resources when appropriate. Over 80 percent of the companies Lopez Research interviewed said they struggled to find the budget and technical talent to deploy AI. These AI deployment challenges have only increased as more AI models and AI infrastructure services were launched. Silicon Diversity and Customer Choice A central theme of Dell's AI Factory message is how Dell makes AI easier to deploy while delivering choice. Dell is offering customers choice through silicon diversity in its designs, but also with ISV models. The company announced it has added Intel to its AI Factory portfolio with Intel Gaudi 3 AI accelerators and Intel Xeon processors, with a strong focus on inferencing workloads. Dell also announced its fourth update to the Dell AI Platform with AMD, rolling out two new PowerEdge servers—the XE9785 and the XE9785L—equipped with the latest AMD Instinct MI350 series GPUs. The Dell AI Factory with NVIDIA combines Dell's infrastructure with NVIDIA's AI software and GPU technologies to deliver end-to-end solutions that can reduce setup time by up to 86% compared to traditional approaches. The company also continues to strengthen its partnership with NVIDIA, announcing products that leverage NVIDIA's Blackwell family and other updates launched at NVIDIA GTC. As of today, Dell supports choice by delivering AI solutions with all of the primary GPU and AI accelerator infrastructure providers. 
Client-Side AI Advancements At the edge of the AI Factory ecosystem, Dell announced enhancements to the Dell Pro Max in a mobile form factor, leveraging Qualcomm’s AI 100 discrete NPUs designed for AI engineers and data scientists who need fast inferencing capabilities. With up to 288 TOPs at 16-bit floating point precision, these devices can power up to a 70-billion parameter model, delivering 7x the inferencing speed and 4x the accuracy over a 40 TOPs NPU. According to Dell, the Pro Max Plus line can run a 109-billion parameter AI model. The Dell Pro Max Plus targets AI engineers and data scientist with with the ability to run a ... More 109-billion parameter modelDell Technologies The Pro Max and Plus launches follow Dell's previous announcement of AI PCs featuring Dell Pro Max with GB 10 and GB 300 processors powered by NVIDIA's Grace Blackwell architecture. Overall, Dell has simplified its PC portfolio but made it easier for customers to choose the right system for their workloads by providing the latest chips from AMD, Intel, Nvidia, and Qualcomm. On-Premise AI Deployment Gains Ecosystem Momentum Following the theme of choice, organizations need the flexibility to run AI workloads on-premises and in the cloud. Dell is making significant strides in enabling on-premise AI deployments with major software partners. The company announced it is the first provider to bring Cohere capabilities on-premises, combining Cohere's generative AI models with Dell's secure, scalable infrastructure for turnkey enterprise solutions. Similar partnerships with Mistral and Glean were also announced, with Dell facilitating their first on-premise deployments. Additionally, Dell is supporting Google's Gemini on-premises with Google Distributed Cloud. To simplify model deployment, Dell now offers customers the ability to choose models on Hugging Face and deploy them in an automated fashion using containers and scripts. Enterprises increasingly recognize that while public cloud AI has its place, a hybrid AI infrastructure approach could deliver better economics and security for production workloads. The imperative for scalable yet efficient AI infrastructure at the edge is a growing need. As Michael Dell said during his Dell Technologies World keynote, “Over 75% of enterprise data will soon be created and processed at the edge, and AI will follow that data; it's not the other way around. The future of AI will be decentralized, low latency, and hyper-efficient.” Dell's ability to offer robust hybrid and fully on-premises solutions for AI is proving to be a significant advantage as companies increasingly seek on-premises support and even potentially air-gapped solutions for their most sensitive AI workloads. Key industries adopting the Dell AI Factory include finance, retail, energy, and healthcare providers. Scaling AI Requires a Focus on Energy Efficiency Simplifying AI also requires product innovations that deliver cost-effective, energy-efficient technology. As AI workloads drive unprecedented power consumption, Dell has prioritized energy efficiency in its latest offerings. The company introduced the Dell PowerCool Enclosed Rear Door Heat Exchanger (eRDHx) with Dell Integrated Rack Controller (IRC). This new cooling solution captures nearly 100% of the heat coming from GPU-intensive workloads. This innovation lowers cooling energy requirements for a rack by 60%, allowing customers to deploy 16% more racks with the same power infrastructure. 
Dell's new systems are rated to operate at 32 to 37 degrees Celsius, supporting significantly warmer temperatures than traditional air-cooled or water-chilled systems and further reducing power consumption for cooling. The PowerEdge XE9785L now offers Dell liquid cooling for flexible power management. Even if a company isn't aiming for a specific sustainability goal, every organization wants to improve energy utilization.

Early Adopter Use Cases Highlight AI's Opportunity
With over 200 product enhancements to its AI Factory in just one year, Dell Technologies is positioning itself as a central player in the rapidly evolving enterprise AI infrastructure market. It offers the breadth of solutions and expertise organizations require to successfully implement production-grade AI systems in a secure and scalable fashion. However, none of this technology matters if enterprises can't find a way to create business value by adopting it. Fortunately, examples from the first wave of enterprise early adopters highlight ways AI can deliver meaningful returns in productivity and customer experience. Let's look at two use cases presented at Dell Tech World. (Image: Michael Dell, CEO of Dell Technologies, interviews Larry Feinsmith, Managing Director and Head of Global Tech Strategy, Innovation & Partnerships at JPMorgan Chase, at Dell Technologies World. Credit: Dell Technologies)

The Power of LLMs in Finance at JPMorgan Chase
JPMorgan Chase took the stage to make AI real from a customer's perspective. The financial firm uses Dell's compute hardware, software-defined storage, client, and peripheral solutions. Larry Feinsmith, the Managing Director and Head of Global Tech Strategy, Innovation & Partnerships at JPMorgan Chase, said, "We have a hybrid, multi-cloud, multi-provider strategy. Our private cloud is an incredibly strategic asset for us. We still have many applications and data on-premises for resiliency, latency, and a variety of other benefits." Feinsmith also spoke of the company's Large Language Model (LLM) strategy. He said, "Our strategy is to use a constellation of models, both foundational and open, which requires a tremendous amount of compute in our data centers, in the public cloud, and, of course, at the edge. The one constant thing, whether you're training models, fine-tuning models, or finding a great use case that has large-scale inferencing, is that they all will drive compute. We think Dell is incredibly well positioned to help JPMorgan Chase and other companies in their AI journey." Feinsmith noted that using AI isn't new for JPMorgan Chase. For over a decade, JPMorgan Chase has leveraged various types of AI, such as machine learning models for fraud detection, personalization, and marketing operations. The company uses what Feinsmith called its LLM suite, which over 200,000 people at JPMorgan Chase use today. The generative AI application is used for Q&A, summarization, and content generation using JPMorgan Chase's data in a highly secure way. Next, it has used the LLM suite architecture to build applications for its financial advisors, contact center agents, and any employee interacting with its clients. Its third use case highlighted changes in the software development area. JPMorgan Chase rolled out code generation AI capabilities to over 40,000 engineers. It has achieved productivity gains of as much as 20% in code creation and expects to leverage AI throughout the software development life cycle.
Going forward, the financial firm expects to use AI agents and reasoning models to execute complex business processes end-to-end. (Image: Seemantini Godbole, EVP and Chief Digital and Information Officer at Lowe's, shares the retailer's AI strategy at Dell Technologies World. Credit: Dell Technologies)

How AI Makes It Easier For Employees to Serve Customers at Lowe's
Lowe's Home Improvement Stores provided another example of how companies are leveraging Dell technology and AI to transform the customer and employee experience. Seemantini Godbole, EVP and Chief Digital and Information Officer at Lowe's, shared insights on designing the strategy for AI when she said, "How should we deploy AI? We wanted to do impactful and meaningful things. We did not want to die a death of 1,000 pilots, and we organized our efforts across how we sell, how we shop, and how we work. How we sell was for our associates. How we shop is for our customers, and how we work is for our headquarters employees. For whatever reason, most companies have begun with their workforce in the headquarters. We said, no, we are going to put AI in the hands of 300,000 associates." For example, she described a generative AI companion app for store associates. "Every store associate now has on his or her Zebra device a ChatGPT-like experience for home improvement," said Godbole. Lowe's is also deploying computer vision algorithms at the edge to understand issues such as whether a customer in a particular aisle is waiting for help. The system will then send notifications to the associates in that department. Customers can also ask various home improvement questions, such as what paint finish to use in a bathroom, at Lowes.com/AI.

Designing A World Where AI Infrastructure Delivers Human Opportunity
Michael Dell said, "We are entering the age of ubiquitous intelligence, where AI becomes as essential as electricity. With AI, you can distill years of experience into instant insights, speeding up decisions and uncovering patterns in massive data. But it's not here to replace humans. AI is a collaborator that frees your teams to do what they do best, to innovate, to imagine, and to solve the world's toughest problems." While there are many AI deployment challenges ahead, the customer examples shared at Dell Technologies World provide a glimpse into a world where AI infrastructure and services benefit both customers and employees. The challenge now is to do this sustainably and ethically at scale.
    0 Comments ·0 Shares ·0 Reviews
  • $8000* Disaster Prebuilt PC - Corsair & Origin Fail Again

PC Builds | $8000* Disaster Prebuilt PC - Corsair & Origin Fail Again
May 19, 2025 | Last Updated: 2025-05-19
We test Origin's expensive PC's thermals, acoustics, power, frequency, and perform a tear-down.

The Highlights
- Our Origin Genesis PC comes with an RTX 5090, 9800X3D, and 32GB of system memory
- Due to poor system thermals, the memory on the GPU fails our testing
- The fans in the system don't ramp up until the liquid-cooled CPU gets warm, which means the air-cooled GPU temperature suffers

Original MSRP: $6,050+
Release Date: January 2025

Our fully custom 3D Emblem Glasses celebrate our 15th Anniversary! We hand-assemble these on the East Coast in the US with a metal badge, strong adhesive, and high-quality pint glass. They pair excellently with our 3D 'Debug' Drink Coasters. Purchases keep us ad-free and directly support our consumer-focused reviews!

Intro
We paid $6,050 for Origin PC's 5090-powered Genesis when it launched, or $6,500 after taxes. Today, a similar build has a list price of $8,396. Markup is $1,700 to $2,500 over DIY. This computer costs as much as an RTX Pro 6000, or a used car, or a brand new Kia Rio with a lifetime warranty in 2008 with passenger doors that fall off… The point is, this is expensive, and it also sucks.

Editor's note: This was originally published on May 16, 2025 as a video. This content has been adapted to written format for this article and is unchanged from the original publication.

Credits: Test Lead, Host, Writing: Steve Burke | Video Editing, Camera: Mike Gaglione | Testing, Writing: Jeremy Clayton | Camera: Tim Phetdara | Writing, Web Editing: Jimmy Thang

The RTX 5090 is the most valuable thing in this for its 32GB of VRAM, and to show you how much they care about the only reason you'd buy this prebuilt, Origin incinerates the memory at 100 degrees Celsius by choosing to not spin the fans for 8 minutes while under load. The so-called "premium" water cooling includes tubes made out of discolored McDonald's toy plastic that was left in the sun too long, making it look old, degraded, and dirty. But there are some upsides for this expensive computer. For example, it's quiet, to its credit, mostly because the fans don't spin… for 8 minutes.

Overview
Originally, this Origin Genesis pre-built cost $6,488 – and that's after taxes and a $672 discount off the initial sticker price of $6,722. We ordered it immediately after the RTX 5090 launch, which turned out to be one of the only reliable ways to actually get a 5090 with supply as bad as it was (and continues to be). It took a while to come in, but it did arrive in the usual Origin crate. We reviewed one of these a couple years ago that was a total disaster of a combo. The system had a severely underclocked CPU, ridiculously aggressive fan behavior (which is the opposite of the system we're reviewing today), chipped paint, and a nearly unserviceable hardline custom liquid cooling loop. Hopefully this one has improved.
And hopefully isn't 1GHz below spec.

Parts and Price
Origin PC RTX 5090 + 9800X3D "Genesis" Part Prices | GamersNexus (retail prices as of 4/25):
- Motherboard: MSI PRO B650-P WIFI – $190
- CPU: Ryzen 7 9800X3D – $480
- Graphics Card: NVIDIA RTX 5090 Founders Edition – $2,000
- RAM: Corsair Vengeance DDR5-6000 (2x16GB) – $93
- SSD 1: Corsair MP600 CORE XT 1TB PCIe 4 M.2 SSD – $70
- Custom Loop: "Hydro X iCUE LINK Cooling" / Pump, Rad, Block, Fittings – $712
- Fans: 12x Corsair iCUE LINK RX120 120mm Fan – $360
- Case: Corsair 7000D Airflow – $240
- PSU: Corsair RM1200x SHIFT 80+ Gold PSU – $230
- RGB/Fan Controller: 2x Corsair iCUE Link System Hub – $118
- Operating System: Windows 11 – N/A
- T-Shirt: ORIGIN PC T-Shirt – N/A
- Mousepad: ORIGIN PC Mouse Pad – N/A
- Shipping: "ORIGIN Maximum Protection Shipping Process: ORIGIN Wooden Crate Armor" – N/A
- ???: "The ORIGIN Difference: Unrivaled Quality & Performance" – Priceless
Total retail cost of all parts as of April 2025: $4,493

We'll price it out based on the original, pre-tariff $6,050 build before taxes and with a 10% off promo. Keep in mind that the new price is $7,500 to $8,400, depending on when you buy. The good news is that nothing is proprietary – all of its parts are standard. The bad news is that this means we can directly compare it to retail parts which, at the time we wrote this piece, would cost $4,493, making for a $1,557 markup compared to the pre-tax subtotal. That's a huge amount to pay for someone to screw the parts together. Given the price of the system, the MSI PRO B650-P WIFI motherboard and 1TB SSD are stingy and the 7000D Airflow case is old at this point. The parts don't match the price. Just two months after we ordered and around when it finally arrived, Origin now offers a totally different case and board with the Gigabyte X870E Aorus Elite. The base SSD is still just 1TB though – only good enough for roughly two or three full Call of Duty installs. The detailed packing sheet lists 22 various water cooling fittings, but, curiously, the build itself only has 15, plus one more in the accessory kit, making it 16 by our count. We don't know how Origin got 22 here, but it isn't 22. Hopefully we weren't charged for 22. Oh, and it apparently comes with "1 Integrated High-Definition." Good. That's good. We wouldn't want 0 integrated high definitions. Similar to last time, you also get "The ORIGIN Difference: Unrivaled Quality & Performance" as a line item. Putting intangible, unachievable promises on the literal receipt is the Origin way: Origin's quality is certainly rivaled. Against DIY, pricing is extreme and insane as an absolute dollar amount when the other SIs are around $500-$800 markup at the high end. In order for this system to be "worth" $1,500 more than DIY, it would need to be immaculate and it's not. The only real value the PC offers is the 5090. Finding a 5090 Founders Edition now for $2,000 is an increasingly unlikely scenario. Lately, price increases with scarcity and tariffs have resulted in 5090s closer to $2,800 or more, so the markup with that instead would be $777 if we assume a 5090 costs $2,800. That's still a big markup, and the motherboard is still disappointing, the tubes are still discolored, the SSD is too small, and it still has problems with the fans not properly spinning, but it's less insane.
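If you want to sanity-check those totals, the arithmetic is simple enough to do in a few lines. The prices are the April 2025 retail figures from the table above.

```python
# Sanity check of the table above: DIY part total vs. Origin's pre-tax price.
part_prices_usd = {
    "MSI PRO B650-P WIFI": 190,
    "Ryzen 7 9800X3D": 480,
    "NVIDIA RTX 5090 Founders Edition": 2000,
    "Corsair Vengeance DDR5-6000 (2x16GB)": 93,
    "Corsair MP600 CORE XT 1TB": 70,
    "Hydro X iCUE LINK custom loop": 712,
    "12x Corsair iCUE LINK RX120 fans": 360,
    "Corsair 7000D Airflow": 240,
    "Corsair RM1200x SHIFT": 230,
    "2x Corsair iCUE Link System Hub": 118,
}

diy_total = sum(part_prices_usd.values())
origin_pre_tax = 6050              # original launch price with the 10% promo, before taxes

print(diy_total)                   # 4493
print(origin_pre_tax - diy_total)  # 1557 -> the markup quoted above
```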
Build Quality
Getting into the parts choices: This new Genesis has a loop that's technically set up better than the last one, but it only cools the CPU. That means we have a $6,500 computer with water cooling, but only on the coolest of the two silicon parts -- the one that pulls under 150W. That leaves the 575W RTX 5090 FE to fend for itself, and that doesn't always go well. Originally, Origin didn't have the option to water cool the 5090. It's just a shame that Origin isn't owned by a gigantic PC hardware company that manufactures its own water cooling components and even has its own factories and is publicly traded and transacts billions of dollars a year to the point that it might have had enough access to make a block... A damn shame. Maybe we'll buy from a bigger company next time. At least now, with the new sticker price of $8,400, you can spend another $200 and add a water block to the GPU. Problem solved -- turns out, we just needed to spend even more money. Here's a closer look at Origin's "premium" cooling solution, complete with saggy routing that looks deflated and discolored tubing that has that well-hydrated catheter tube coloring to it. The fluid is clean and the contents of the block are fine, but the tubing is the problem. In fact, the included drain tube is the correct coloring, making it even more obvious how discolored the loop is. Corsair says its XT Softline tubing is "UV-resistant tubing made to withstand the test of time without any discoloration or deforming." So clearly something is wrong. Or not "clearly," actually, seeing as it's not clear. The tubing looks gross. It shouldn't look gross. The spare piece in the accessory kit doesn't look gross. The coolant is even Corsair's own XL8 clear fluid, making it even more inexcusable. We're not the only ones to have this problem, though – we found several posts online with the same issue and very little in the way of an official response from Corsair or Origin. We only saw one reply asking the user to contact support. Even without the discoloration, it comes off as looking amateurish from the way it just hangs around the inside of the case. There's not a lot you can do about long runs of flexible tubing, unless maybe you're the one building it and have complete control of everything in the pipeline... There is one thing we can compliment about the loop: Origin actually added a ball valve at the bottom underneath the pump for draining and maintenance, which is something that we directly complained about on the previous Origin pre-built. We're glad to see that get addressed. The fans in the build are part of Corsair's relatively new LINK family, so they're all daisy chained together with a single USB-C-esque cable and controlled together in tandem by two of Corsair's hubs. It's an interesting (if expensive) system that extends to include the pump and CPU block – both of which have liquid temperature sensors.

Tear-down
Grab a GN15 Large Anti-Static Modmat to celebrate our 15th Anniversary and for a high-quality PC building work surface. The Modmat features useful PC building diagrams and is anti-static conductive. Purchases directly fund our work! (or consider a direct donation or a Patreon contribution!)
We're starting the tear-down by looking at the cable management side. Opening up the swinging side panel, we noticed masking tape on the dust filter, which we're actually okay with as it's to keep it in place during shipping and is removable. Internally, they've included all of the unused PSU cables in the system's accessories box, which we'll talk more about down below. The cable routing makes sense and is generally well managed. While they tied the cables together, not all of the ties were tied down to the chassis. The system uses the cable management channel for the 24-pin connector. Overall, it's clean and they've done well here. Looking at the other side of the system, we can see that the power cable leading into the 5090 is mostly seated, and isn't a concern to us.
Removing the water block's cable, it had a little piece of plastic which acted as a pull tab. That's actually kind of nice. Removing the screws on the water block reveals that they are captive, which is nice. Looking at the pattern, we can see that they used pre-applied paste via a silk screen. That allowed contact for all 8 legs of the IHS, which looked good with overall even pressure. The block application was also good. Looking at how well all of the cables were seated, everything was fine from the CPU fan header down to the front panel connectors. Removing the heat sink off the NVMe SSD, we didn't see any plastic on the thermal pad, which is good. Looking at the 16GB DDR5-6000 RAM modules, they are in the correct slots and Origin outfitted them with Corsair 36-44-44-96 sticks, which are not the greatest timings. Examining the tightness of all the screws on the motherboard, we didn't encounter any loose screws. Removing the motherboard from the case, everything looked fine. Looking at the motherboard out of the case, it's a lower-end board than we'd like to see out of a premium system. Looking at the fans, they are immaculately installed, which is partially due to how they're connected together. This results in a very clean setup. The back side of the PC has a massive radiator. And overall, the system has very clean cable management and the assembly was mostly good. That leaves the system's biggest issues as the value and its water-cooling setup. We didn't drain the loop so we're going to keep running it and see what it looks like down the road.

Thermal Benchmarks
System Thermals at Steady State
Getting into the benchmarking, we'll start with thermals. Right away, the 96-degree result on the memory junction is a problem -- especially because this is an average, which means we have spikes periodically to 100 degrees. The technical rating on this memory is 105 degrees for maximum safety spec. This is getting way too close and is hotter than what we saw in our 5090 FE review. This is also when all of the thermal pads are brand new. The Origin pre-built uses a large case with 12 fans, so it should be impossible for the GPU to be this hot. The Ryzen 9800X3D hit 87C at steady-state – which is also not great for how much cooling is in this box. All of the various motherboard and general system temperature sensors fell well within acceptable ranges. Finally, the watercooling parts provide a couple of liquid temperatures. The pump is on the "cool" side of the loop and read 36.7C at steady state, while the coolant in the block on the "hot" side of the loop got up to 41.3C. You typically want liquid temperature to stay under 55C (at the most) to not violate spec on the pump and tubing, so this is fine. We need to plot these over time to uncover some very strange behavior.

CPU Temperature vs. Fan Speeds Over Time
CPU temperature during the test starts out on a slow ramp upwards during the idle period. When the CPU load first starts, we see an immediate jump to about 72C, a brief drop, then a long and steady rise from roughly 250 seconds to 750 seconds into the test where it levels off at the 87C mark. The VRM temperature follows the same general curve, but takes longer to reach steady-state. Adding the liquid temperatures to the chart shows the same breakpoints. Finally, adding pump and fan speeds gives us the big reveal for why the curves look like this. The pump stair-steps up in speed while the temperatures rise, but the fans don't even turn on for over 8 minutes into the load's runtime. Once they're actually running, they average out to just 530RPM, which is so slow that they might as well be off. This is an awful configuration. Response to liquid temperature isn't new, but this is done without any thought whatsoever. If you tie all fans to liquid temperature, and if you have parts not cooled by liquid like VRAM on the video card, then you're going to have a bad time.
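Here's a minimal sketch of that failure mode (the threshold, slope, and temperatures are made up for illustration, not pulled from Origin's or Corsair's firmware):

```python
# Minimal sketch of the failure mode: a case-fan curve keyed ONLY to coolant
# temperature. Numbers are hypothetical, not Origin's or Corsair's actual values.

def case_fan_duty(liquid_temp_c: float) -> float:
    """Fan duty (0-1) as a function of loop coolant temperature only."""
    if liquid_temp_c < 38.0:      # below this, every case fan stays off
        return 0.0
    return min(1.0, (liquid_temp_c - 38.0) / 15.0)

# The coolant warms slowly (big thermal mass, CPU pulls under 150 W), while the
# air-cooled 5090's VRAM heats up in seconds under a 575 W load. So for the first
# several minutes the controller sees a "cool" loop and keeps every case fan parked,
# even though the GPU memory is already near its limit.
for minute, liquid_temp in enumerate([33, 34, 35, 36, 37, 38, 39, 41], start=1):
    print(f"min {minute}: coolant {liquid_temp}C -> case fans {case_fan_duty(liquid_temp):.0%}")
# Fans read 0% for the first six simulated minutes and only crawl upward after that,
# roughly the behavior measured here (no case fan movement for 8+ minutes under load).
```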
And that's the next chart. But before that one, this is an overcorrection from how Origin handled the last custom loop PC we reviewed from the company, which immediately ramped the fans up as high as it could as soon as the CPU started doing anything. Maybe now they can find a middle ground since we've found the two extremes of thoughtless cooling.

GPU Temperature vs. Fan Speeds Over Time
This chart shows GPU temperatures versus GPU fan speed. The GPU temperature under load rises to around 83C before coming back down when the case fans finally kick on. As a reminder, 83-84 degrees is when NVIDIA starts hard throttling the clocks more than just from GPU Boost, so they're dropping clocks as a result of this configuration. The 5090's VRAM already runs hot on an open bench – 89 to 90 degrees Celsius – and that gets pushed up to peak at 100C in the Origin pre-built. This is unacceptable. Adding the GPU fan speed to the chart shows us how the Founders Edition cooler attempts to compensate by temporarily boosting fan speed to 56% during this time, which also means that Origin isn't even benefiting as much from the noise levels as it should from the slower fans. Balancing them better would benefit noise more. As neat of a party trick as it is to have the case fans stay off unless they're needed in the loop, Origin should have kept at least one or two running at all times, like rear exhaust, to give the GPU some help. Besides, letting the hot air linger could potentially encourage local hot spots to form on subcomponents that aren't directly monitored, which can lead to problems.

Power At The Wall
Now we'll look at full system load power consumption by logging it at the wall – so everything, even efficiency losses from the PSU, is taken into account. Idle, it pulled a relatively high 125W. At the 180 second mark, the CPU load kicks in. There's a jump at 235 seconds when the GPU load kicks in. We see a slight ramp upwards in power consumption after that, which tracks with increasing leakage as the parts heat up, before settling in at an average of 884W at steady state.
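For a rough sense of what that wall figure means on the DC side: the PSU's exact efficiency at this load wasn't measured here, so the 88-90% range below is just a typical assumption for an 80 Plus Gold unit running at roughly three-quarters of its rating.

```python
# Back-of-the-envelope: what the 884 W wall reading implies on the DC side.
# PSU efficiency is an assumption (~88-90% is typical for an 80+ Gold unit at
# ~75% load); only wall power was measured, not DC-side power.

wall_watts = 884
for eff in (0.88, 0.90):
    dc_watts = wall_watts * eff
    print(f"assumed {eff:.0%} efficiency -> ~{dc_watts:.0f} W delivered to components")
# ~778-796 W, which lines up with a 575 W GPU plus a <150 W CPU and the rest of
# the system (board, RAM, SSD, pump, and a dozen fans).
```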
Acoustics
Next we'll cover dBA over time as measured in our hemi-anechoic chamber. At idle, the fans are off, which makes for a functionally silent system at the noise floor. The first fans to come on in the system are on the GPU, bringing noise levels up to a still-quiet range of 25-28dBA at 1 meter. The loudest point is 30.5 dBA when the GPU fans briefly ramp and before the system fans kick in.

CPU Frequency vs. Original Review
For CPU frequency, fortunately for Origin, it didn't randomly throttle it by 1GHz this time. The 9800X3D managed to stay at 5225MHz during the CPU-only load portion of the torture test – the same frequency that we recorded in our original review for the CPU, so that's good. At steady state with the GPU dumping over 500W of heat into the case, the average core frequency dropped by 50MHz. If Origin made better use of its dozen or so fans, it should hold onto more of that frequency.

BIOS Configuration
BIOS for the Origin pre-built is set up sensibly, at least. The build date is January 23, which was the latest available in the time between when we ordered the system at the 50 series launch and when the system was actually assembled. Scrutinizing the chosen settings revealed nothing out of line. The DDR5-6000 memory profile was enabled and the rest of the core settings were properly set to Auto. This was all fine.

Setup and Software
The Windows install was normal with no bloatware. That's also good. The desktop had a few things on it. A "Link Windows 10 Key to Microsoft Account" PDF is helpful for people who don't know what to do if their system shows the Activate Windows watermark. Confusingly, it hasn't been updated to say "11" instead of "10." It also shepherds the user towards using a Microsoft account. That's not necessarily a bad thing, but we don't like how it makes it seem necessary because it's not and you shouldn't. There's also an "Origin PC ReadMe" PDF that doesn't offer much except coverage for Origin's ass with disclaimers and points of contact for support. One useful thing is that it points the user to "C:\ORIGIN PC" to find "important items." That folder has Origin branded gifs, logos, and wallpapers, as well as CPU-Z, Teamviewer, and a Results folder. Teamviewer is almost certainly for Origin's support teams to be able to remotely inspect the PC during support calls. It makes sense to have that stuff on there. The results folder contains an OCCT test report that shows a total of 1 hour and 52 minutes of testing: a CPU test for 12 minutes; CPU + RAM, memory, and 3D adaptive tests for 30 minutes each; then finishing with 10 minutes of OCCT's "power" test, which is a combined full system load. It's great that Origin actually does testing and provides this log as a baseline for future issues, and just for base expectations. This is good and gives you something to work from. Not having OCCT pre-installed to actually run again for comparison is a support oversight. It's free for personal use at least, so the user could go download it easily. There weren't any missing drivers in Device Manager and NVIDIA's 572.47 driver from February 20 was the latest at the time of the build – both good things. There wasn't any bundled bloatware installed, so points to Origin for that. iCUE itself isn't as bad as it used to be, but it's still clunky, like the preloaded fan profiles not showing their set points.

Packaging
On to packaging. The Origin Genesis pre-built came in a massive wooden crate that was big enough for two people to move around. Considering this PC was after taxes, we're definitely OK with the wooden crate and its QR code opening instructions. Origin uses foam, a fabric cover, a cardboard box within a crate, and the crate for the PC. The case had two packs of expanding foam inside it, allowing the GPU to arrive undamaged and installed. The sticker on the side panel also had clear instructions. These are good things. Unfortunately, there's a small chip in the paint on top of the case, but not as bad as the last Origin paint issues we had and we think it's unrelated to the packaging itself.

Accessories
The accessory kit is basic, and came inside of a box with the overused cringey adage "EAT SLEEP GAME REPEAT" printed on it. Inside are the spare PSU cables, an AC power cable, stock 5090 FE power adapter, standard motherboard and case accessories, a G1/4 plug tool and extra plugs, and a piece of soft tubing with a fitting on one end that can be used to help drain the cooling loop.
All of this is good.

Conclusion
Visit our Patreon page to contribute a few dollars toward this website's operation. Additionally, when you purchase through links to retailers on our site, we may earn a small affiliate commission.
During this review process, the price went even higher. You already shouldn't buy this, but just to drive it home: for the same configuration, the Genesis now costs after the discount, off the new sticker price of That's an increase of over making the premium over current DIY pricing roughly - Now, there are good reasons for the price to go up. Tariffs have a real impact on pricing and we're going to see it everywhere, and tariffs are also outside of Corsair's control. We don't fault them for that. But that doesn't change the fact that the cost over DIY is so insanely elevated. Even Corsair's own competitors offer better value than this, like Maingear. At sticker price, you'd have to be drunk on whatever is discoloring Origin's loop to buy it. Nobody should buy this, especially not for gaming. If you're doing productivity or creative work that would seriously benefit from the 5090's 32GB of VRAM, then look elsewhere for a better deal. This costs nearly as much as an RTX Pro 6000, which has 96GB of VRAM and is better. It would actually be cheaper to get scalped for a 5090 on eBay and then buy the whole rest of the computer than to buy this Origin system. That's how crazy this is. The upcharge, even assuming a $2,800 5090, is just way too high versus other system integrators. Seriously, Alienware is cheaper at this point – by thousands of dollars. Alienware.
We can't recommend this PC. Ignoring the price, the memory on the video card is hitting 100 degrees C in workloads when the fans aren't turning on because the fans are set to turn on based on the liquid temperature and the liquid doesn't touch the GPU. For that reason alone, it gets a failing grade. For our thermal testing, pre-builts have to pass the torture test. If they don't, they instantly fail. That's how it always works for our pre-built reviews. This system has, unfortunately, instantly failed.
    #disaster #prebuilt #corsair #origin
    $8000* Disaster Prebuilt PC - Corsair & Origin Fail Again
    gamersnexus.net
    PC Builds $8000* Disaster Prebuilt PC - Corsair & Origin Fail AgainMay 19, 2025Last Updated: 2025-05-19We test Origin's expensive PC’s thermals, acoustics, power, frequency, and perform a tear-downThe HighlightsOur Origin Genesis PC comes with an RTX 5090, 9800X3D, and 32GB of system memoryDue to poor system thermals, the memory on the GPU fails our testingThe fans in the system don’t ramp up until the liquid-cooled CPU gets warm, which means the air-cooled GPU temperature suffersOriginal MSRP: $6,050+Release Date: January 2025Table of ContentsAutoTOC Our fully custom 3D Emblem Glasses celebrate our 15th Anniversary! We hand-assemble these on the East Coast in the US with a metal badge, strong adhesive, and high-quality pint glass. They pair excellently with our 3D 'Debug' Drink Coasters. Purchases keep us ad-free and directly support our consumer-focused reviews!IntroWe paid $6,050 for Origin PC’s 5090-powered Genesis when it launched, or $6,500 after taxes. Today, a similar build has a list price of $8,396. Markup is $1,700 to $2,500 over DIY. This computer costs as much as an RTX Pro 6000, or a used car, or a brand new Kia Rio with a lifetime warranty in 2008 with passenger doors that fall off…The point is, this is expensive, and it also sucks.Editor's note: This was originally published on May 16, 2025 as a video. This content has been adapted to written format for this article and is unchanged from the original publication.CreditsTest Lead, Host, WritingSteve BurkeVideo Editing, CameraMike GaglioneTesting, WritingJeremy ClaytonCameraTim PhetdaraWriting, Web EditingJimmy ThangThe RTX 5090 is the most valuable thing in this for its 32GB of VRAM, and to show you how much they care about the only reason you’d buy this prebuilt, Origin incinerates the memory at 100 degrees Celsius by choosing to not spin the fans for 8 minutes while under load. The so-called “premium” water cooling includes tubes made out of discolored McDonald’s toy plastic that was left in the sun too long, making it look old, degraded, and dirty.But there are some upsides for this expensive computer. For example, it’s quiet, to its credit, mostly because the fans don’t spin…for 8 minutes.OverviewOriginally, this Origin Genesis pre-built cost $6,488 – and that’s after taxes and a $672 discount off the initial sticker price of $6,722. We ordered it immediately after the RTX 5090 launch, which turned out to be one of the only reliable ways to actually get a 5090 with supply as bad as it was (and continues to be). It took a while to come in, but it did arrive in the usual Origin crate.We reviewed one of these a couple years ago that was a total disaster of a combo. The system had a severely underclocked CPU, ridiculously aggressive fan behavior (which is the opposite of the system we’re reviewing today), chipped paint, and a nearly unserviceable hardline custom liquid cooling loop. Hopefully this one has improved. 
And hopefully isn’t 1GHz below spec.Parts and PriceOrigin PC RTX 5090 + 9800X3D "Genesis" Part Prices | GamersNexusPart NameRetail Price 4/25MotherboardMSI PRO B650-P WIFI$190CPURyzen 7 9800X3D$480Graphics CardNVIDIA RTX 5090 Founders Edition$2,000RAMCorsair Vengeance DDR5-6000 (2x16GB)$93SSD 1Corsair MP600 CORE XT 1TB PCIe 4 M.2 SSD$70Custom Loop"Hydro X iCUE LINK Cooling" / Pump, Rad, Block, Fittings$712Fans12x Corsair iCUE LINK RX120 120mm Fan$360CaseCorsair 7000D Airflow$240PSUCorsair RM1200x SHIFT 80+ Gold PSU$230RGB/Fan Controller2x Corsair iCUE Link System Hub$118Operating SystemWindows 11N/AT-ShirtORIGIN PC T-ShirtN/AMousepadORIGIN PC Mouse PadN/AShipping"ORIGIN Maximum Protection Shipping Process: ORIGIN Wooden Crate Armor"N/A???"The ORIGIN Difference: Unrivaled Quality & Performance"PricelessTotal retail cost of all parts as of April 2025$4,493We’ll price it out based on the original, pre-tariff $6,050 build before taxes and with a 10% off promo. Keep in mind that the new price is $7,500 to $8,400, depending on when you buy.The good news is that nothing is proprietary – all of its parts are standard. The bad news is that this means we can directly compare it to retail parts which, at the time we wrote this piece, would cost $4,493, making for a $1,557 markup compared to the pre-tax subtotal. That’s a huge amount to pay for someone to screw the parts together. Given the price of the system, the MSI PRO B650-P WIFI motherboard and 1TB SSD are stingy and the 7000D Airflow case is old at this point. The parts don’t match the price.Just two months after we ordered and around when it finally arrived, Origin now offers a totally different case and board with the Gigabyte X870E Aorus Elite. The base SSD is still just 1TB though – only good enough for roughly two or three full Call of Duty installs. The detailed packing sheet lists 22 various water cooling fittings, but, curiously, the build itself only has 15, plus one more in the accessory kit, making it 16 by our count. We don’t know how Origin got 22 here, but it isn’t 22. Hopefully we weren’t charged for 22. Oh, and it apparently comes with “1 Integrated High-Definition.” Good. That’s good. We wouldn’t want 0 integrated high definitions.Similar to last time, you also get “The ORIGIN Difference: Unrivaled Quality & Performance” as a line item. Putting intangible, unachievable promises on the literal receipt is the Origin way: Origin’s quality is certainly rivaled.Against DIY, pricing is extreme and insane as an absolute dollar amount when the other SIs are around $500-$800 markup at the high end. In order for this system to be “worth” $1,500 more than DIY, it would need to be immaculate and it’s not. The only real value the PC offers is the 5090. Finding a 5090 Founders Edition now for $2,000 is an increasingly unlikely scenario. Lately, price increases with scarcity and tariffs have resulted in 5090s closer to $2,800 or more, so the markup with that instead would be $777 if we assume a 5090 costs $2,800. That’s still a big markup, and the motherboard is still disappointing, the tubes are still discolored, the SSD is too small, and it still has problems with the fans not properly spinning, but it’s less insane.Build QualityGetting into the parts choices:This new Genesis has a loop that’s technically set up better than the last one, but it only cools the CPU. That means we have a $6,500 computer with water cooling, but only on the coolest of the two silicon parts -- the one that pulls under 150W. 
Build Quality

Getting into the parts choices: this new Genesis has a loop that's technically set up better than the last one, but it only cools the CPU. That means we have a $6,500 computer with water cooling, but only on the cooler of the two silicon parts -- the one that pulls under 150W. That leaves the 575W RTX 5090 FE to fend for itself, and that doesn't always go well.

Originally, Origin didn't offer the option to water cool the 5090. It's just a shame that Origin isn't owned by a gigantic PC hardware company that manufactures its own water cooling components, has its own factories, is publicly traded, and transacts billions of dollars a year -- to the point that it might have had enough access to make a block... A damn shame. Maybe we'll buy from a bigger company next time.

At least now, with the new sticker price of $8,400, you can spend another $200 and add a water block to the GPU. Problem solved -- turns out, we just needed to spend even more money.

Here's a closer look at Origin's "premium" cooling solution, complete with saggy routing that looks deflated and discolored tubing with that well-hydrated catheter coloring to it. The fluid is clean and the contents of the block are fine; the tubing is the problem. In fact, the included drain tube is the correct coloring, making it even more obvious how discolored the loop is.

Corsair says its XT Softline tubing is "UV-resistant tubing made to withstand the test of time without any discoloration or deforming." So clearly something is wrong. Or not "clearly," actually, seeing as it's not clear. The tubing looks gross. It shouldn't look gross. The spare piece in the accessory kit doesn't look gross. The coolant is even Corsair's own XL8 clear fluid, making it even more inexcusable.

We're not the only ones to have this problem, either -- we found several posts online with the same issue and very little in the way of an official response from Corsair or Origin. We only saw one reply asking the user to contact support.

Even without the discoloration, the loop comes off as amateurish from the way it just hangs around the inside of the case. There's not a lot you can do about long runs of flexible tubing -- unless maybe you're the one building it and have complete control of everything in the pipeline...

There is one thing we can compliment about the loop: Origin actually added a ball valve at the bottom underneath the pump for draining and maintenance, which is something we directly complained about on the previous Origin pre-built. We're glad to see that get addressed.

The fans in the build are part of Corsair's relatively new LINK family, so they're all daisy-chained together with a single USB-C-esque cable and controlled in tandem by two of Corsair's hubs. It's an interesting (if expensive) system that extends to include the pump and CPU block -- both of which have liquid temperature sensors.

Tear-down

Grab a GN15 Large Anti-Static Modmat to celebrate our 15th Anniversary and for a high-quality PC building work surface. The Modmat features useful PC building diagrams and is anti-static conductive. Purchases directly fund our work! (or consider a direct donation or a Patreon contribution!)

We're starting the tear-down by looking at the cable management side. Opening up the swinging side panel, we noticed masking tape on the dust filter, which we're actually okay with: it keeps the filter in place during shipping and is removable. Internally, Origin included all of the unused PSU cables in the system's accessories box, which we'll talk more about down below. The cable routing makes sense and is generally well managed, although while the cables are tied together, not all of the ties are anchored to the chassis. The system uses the cable management channel for the 24-pin connector.
Overall, it's clean and they've done well here.

Looking at the other side of the system, the power cable leading into the 5090 is mostly seated and isn't a concern to us. The water block's cable has a little piece of plastic that acts as a pull tab, which is actually kind of nice. Removing the screws on the water block reveals that they're captive -- also nice. Looking at the contact pattern, Origin used pre-applied paste via a silk screen, which made contact across all 8 legs of the IHS with overall even pressure. The block application was good.

All of the cables were fully seated, from the CPU fan header down to the front panel connectors. Removing the heatsink from the NVMe SSD, we didn't see any plastic left on the thermal pad, which is good. The 2x16GB DDR5-6000 modules are in the correct slots, though Origin outfitted the system with Corsair 36-44-44-96 sticks, which are not the greatest timings. We checked the tightness of all the screws on the motherboard and didn't find any loose ones, and removing the motherboard from the case, everything looked fine. Out of the case, though, it's a lower-end board than we'd like to see in a premium system.

The fans are immaculately installed, which is partially due to how they're connected together, and the result is a very clean setup. The back side of the PC has a massive radiator. Overall, the system has very clean cable management and the assembly was mostly good, which leaves the system's biggest issues as its value and its water-cooling setup. We didn't drain the loop, so we're going to keep running it and see what it looks like down the road.

Thermal Benchmarks

System Thermals at Steady State

Getting into the benchmarking, we'll start with thermals.

Right away, the 96-degree result on the memory junction is a problem -- especially because this is an average, which means we have periodic spikes to 100 degrees. The technical rating on this memory is 105 degrees for the maximum safety spec. This is getting way too close and is hotter than what we saw in our 5090 FE review, and that's with all of the thermal pads brand new. The Origin pre-built uses a large case with 12 fans, so it should be impossible for the GPU to be this hot. The Ryzen 7 9800X3D hit 87C at steady state -- also not great for how much cooling is in this box. All of the various motherboard and general system temperature sensors fell well within acceptable ranges.

Finally, the watercooling parts provide a couple of liquid temperatures. The pump is on the "cool" side of the loop and read 36.7C at steady state, while the coolant in the block on the "hot" side of the loop got up to 41.3C. You typically want liquid temperature to stay under 55C (at the most) to avoid violating spec on the pump and tubing, so this is fine.

We need to plot these over time to uncover some very strange behavior.

CPU Temperature vs. Fan Speeds Over Time

CPU temperature during the test starts out on a slow ramp upwards during the idle period. When the CPU load first starts, we see an immediate jump to about 72C, a brief drop, then a long and steady rise from roughly 250 seconds to 750 seconds into the test, where it levels off at the 87C mark. The VRM temperature follows the same general curve, but takes longer to reach steady state.

Adding the liquid temperatures to the chart shows the same breakpoints. Finally, adding pump and fan speeds gives us the big reveal for why the curves look like this: the pump stair-steps up in speed while the temperatures rise, but the fans don't even turn on until over 8 minutes into the load's runtime. Once they're actually running, they average out to just 530RPM, which is so slow that they might as well be off.

This is an awful configuration. Responding to liquid temperature isn't new, but this is done without any thought whatsoever. If you tie all fans to liquid temperature, and you have parts that aren't cooled by liquid -- like the VRAM on the video card -- then you're going to have a bad time. That's the next chart. But before that one: this is an overcorrection from how Origin handled the last custom loop PC we reviewed from the company, which ramped the fans as high as they could go as soon as the CPU started doing anything. Maybe now they can find a middle ground, since we've found the two extremes of thoughtless cooling. A sketch of what that middle ground could look like follows below.
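To be clear, we're not privy to Origin's actual iCUE configuration. The following is a minimal, hypothetical sketch (not Origin's or Corsair's code, and the curve breakpoints are made up) of the kind of blended policy we mean: case fan duty tracks whichever is worse between a liquid-temperature curve and a GPU-temperature curve, with a floor so the fans never fully stop.

```python
# Hypothetical fan policy sketch -- not Origin's or Corsair's actual software.
# Idea: derive case fan duty from BOTH loop liquid temperature and GPU temperature,
# and never let the fans fully stop, so air-cooled parts (GPU VRAM, VRM) still get airflow.

def lerp_curve(temp_c: float, points: list[tuple[float, float]]) -> float:
    """Linearly interpolate fan duty (%) from a list of (temperature, duty) points."""
    if temp_c <= points[0][0]:
        return points[0][1]
    for (t0, d0), (t1, d1) in zip(points, points[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return points[-1][1]

# Example curves with made-up breakpoints, for illustration only.
LIQUID_CURVE = [(30, 20), (40, 40), (50, 80), (55, 100)]  # loop spec headroom ends around 55C
GPU_CURVE    = [(40, 20), (60, 40), (75, 70), (84, 100)]  # 83-84C is where clocks drop hard
FLOOR_DUTY   = 20  # keep at least some exhaust running at all times

def case_fan_duty(liquid_temp_c: float, gpu_temp_c: float) -> float:
    """Case fan duty is the worst case of the two curves, never below the floor."""
    return max(lerp_curve(liquid_temp_c, LIQUID_CURVE),
               lerp_curve(gpu_temp_c, GPU_CURVE),
               FLOOR_DUTY)

# Roughly the scenario in this review: cool liquid, hot GPU.
print(case_fan_duty(liquid_temp_c=41.3, gpu_temp_c=83.0))  # responds to the GPU, not just the loop
```

Anything along these lines -- even just a fixed minimum speed on one or two rear exhaust fans -- would have given the air-cooled GPU some help without giving up the quiet idle behavior.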
GPU Temperature vs. Fan Speeds Over Time

This chart shows GPU temperatures versus GPU fan speed. The GPU temperature under load rises to around 83C before coming back down when the case fans finally kick on. As a reminder, 83-84 degrees is when NVIDIA starts hard throttling the clocks beyond normal GPU Boost behavior, so the card is dropping clocks as a result of this configuration.

The 5090's VRAM already runs hot on an open bench -- 89 to 90 degrees Celsius -- and that gets pushed up to a peak of 100C in the Origin pre-built. This is unacceptable. Adding the GPU fan speed to the chart shows how the Founders Edition cooler attempts to compensate by temporarily boosting its fan speed to 56% during this time, which also means Origin isn't benefiting as much from the slow case fans' noise levels as it should. Balancing the two would benefit noise more.

As neat of a party trick as it is to have the case fans stay off unless the loop needs them, Origin should have kept at least one or two running at all times, like rear exhaust, to give the GPU some help. Besides, letting the hot air linger can encourage local hot spots to form on subcomponents that aren't directly monitored, which can lead to problems.

Power At The Wall

Now we'll look at full system load power consumption by logging it at the wall -- so everything, even efficiency losses from the PSU, is taken into account.

Idle, the system pulled a relatively high 125W. At the 180-second mark, the CPU load kicks in, and there's a jump at 235 seconds when the GPU load kicks in. We see a slight ramp upwards in power consumption after that, which tracks with increasing leakage as the parts heat up, before settling in at an average of 884W at steady state.

Acoustics

Next we'll cover dBA over time as measured in our hemi-anechoic chamber.

At idle, the fans are off, which makes for a functionally silent system at the noise floor. The first fans to come on are on the GPU, bringing noise levels up to a still-quiet range of 25-28dBA at 1 meter. The loudest point is 30.5dBA, when the GPU fans briefly ramp before the system fans kick in.

CPU Frequency vs. Original Review

For CPU frequency, fortunately for Origin, it didn't randomly throttle by 1GHz this time. The 9800X3D managed to stay at 5225MHz during the CPU-only load portion of the torture test -- the same frequency we recorded in our original review of the CPU, so that's good.
At steady state, with the GPU dumping over 500W of heat into the case, the average core frequency dropped by 50MHz. If Origin made better use of its dozen fans, it could hold onto more of that frequency.

BIOS Configuration

BIOS for the Origin pre-built is set up sensibly, at least. The build date is January 23, which was the latest available in the window between when we ordered the system at the 50-series launch and when it was actually assembled. Scrutinizing the chosen settings revealed nothing out of line: the DDR5-6000 memory profile was enabled and the rest of the core settings were properly left on Auto. This was all fine.

Setup and Software

The Windows install was normal with no bloatware. That's also good.

The desktop had a few things on it. A "Link Windows 10 Key to Microsoft Account" PDF is helpful for people who don't know what to do if their system shows the Activate Windows watermark. Confusingly, it hasn't been updated to say "11" instead of "10." It also shepherds the user towards using a Microsoft account. That's not necessarily a bad thing, but we don't like how it makes an account seem necessary, because it's not and you shouldn't. There's also an "Origin PC ReadMe" PDF that doesn't offer much except coverage for Origin's ass, with disclaimers and points of contact for support. One useful thing is that it points the user to "C:\ORIGIN PC" to find "important items."

That folder has Origin-branded gifs, logos, and wallpapers, as well as CPU-Z, TeamViewer, and a Results folder. TeamViewer is almost certainly there so Origin's support teams can remotely inspect the PC during support calls. It makes sense to have that stuff on there. The Results folder contains an OCCT test report showing a total of 1 hour and 52 minutes of testing: a CPU test for 12 minutes; CPU + RAM, memory, and 3D adaptive tests for 30 minutes each; then 10 minutes of OCCT's "power" test, which is a combined full-system load. It's great that Origin actually does this testing and provides the log as a baseline for future issues and for base expectations. This is good and gives you something to work from. Not having OCCT pre-installed to run again for comparison is a support oversight, but it's free for personal use, so the user could download it easily.

There weren't any missing drivers in Device Manager, and NVIDIA's 572.47 driver from February 20 was the latest at the time of the build -- both good things. There wasn't any bundled bloatware installed, so points to Origin for that. iCUE itself isn't as bad as it used to be, but it's still clunky -- for example, the preloaded fan profiles don't show their set points.

Packaging

On to packaging. The Origin Genesis pre-built came in a massive wooden crate that was big enough for two people to move around. Considering this PC was $6,500 after taxes (at the time), we're definitely OK with the wooden crate and its QR code opening instructions.

Origin uses foam, a fabric cover, and a cardboard box within the crate for the PC. The case had two packs of expanding foam inside it, allowing the GPU to arrive undamaged and installed. The sticker on the side panel also had clear instructions. These are good things. Unfortunately, there's a small chip in the paint on top of the case -- not as bad as the last Origin paint issues we had, and we think it's unrelated to the packaging itself.

Accessories

The accessory kit is basic and came inside a box with the overused, cringey adage "EAT SLEEP GAME REPEAT" printed on it.
Inside are the spare PSU cables (which we're happy to see included), an AC power cable, the stock 5090 FE power adapter, standard motherboard and case accessories, a G1/4 plug tool and extra plugs, and a piece of soft tubing with a fitting on one end that can be used to help drain the cooling loop. All of this is good.

Conclusion

Visit our Patreon page to contribute a few dollars toward this website's operation (or consider a direct donation or buying something from our GN Store!) Additionally, when you purchase through links to retailers on our site, we may earn a small affiliate commission.

During this review process, the price went even higher. You already shouldn't buy this, but just to drive it home: for the same configuration, the Genesis now costs $7,557 after the discount, off the new sticker price of $8,396. That's an increase of over $1,000, putting the premium over current DIY pricing at roughly $1,700 to $2,500.

Now, there are good reasons for the price to go up. Tariffs have a real impact on pricing, we're going to see that everywhere, and tariffs are outside of Corsair's control. We don't fault them for that. But that doesn't change the fact that the cost over DIY is insanely elevated. Even Corsair's own competitors, like Maingear, offer better value than this.

At an $8,400 sticker price, you'd have to be drunk on whatever is discoloring Origin's loop to buy it. Nobody should buy this, especially not for gaming. If you're doing productivity or creative work that would seriously benefit from the 5090's 32GB of VRAM, then look elsewhere for a better deal: this costs nearly as much as an RTX Pro 6000, which has 96GB of VRAM and is better. It would actually be cheaper to get scalped for a 5090 on eBay and then buy the whole rest of the computer than to buy this Origin system. That's how crazy this is.

The upcharge, even assuming a 5090 price of $2,800, is just way too high versus other system integrators. Seriously, Alienware is cheaper at this point -- by thousands of dollars. Alienware.

We can't recommend this PC. Even ignoring the price, the memory on the video card hits 100 degrees Celsius under load because the fans are set to turn on based on liquid temperature and the liquid doesn't touch the GPU. For that reason alone, it gets a failing grade. For our thermal testing, pre-builts have to pass the torture test. If they don't, they instantly fail. That's how it has always worked for our pre-built reviews. This system has, unfortunately, instantly failed.