• U.S. Prepares New AI Chip Restrictions to Close China's Backdoor Access
    www.wsj.com
    Washington plans rules limiting semiconductor shipments to some countries accused of supplying Beijing.
  • 17 Books We Read This Week
    www.wsj.com
Cinematic disasters, George Gershwin's lyrical brother, the rise of college sports and more.
• 'The Eagle and the Hart' and 'Henry V': Kings, Cousins, Enemies
    www.wsj.com
    The seeds of the Wars of the Roses were planted in the reigns of Richard II, Henry IV and Henry V.
  • Bird flu jumps from birds to human in Louisiana; patient hospitalized
    arstechnica.com
Beth Mole, Dec 13, 2024. This is the first human case of bird flu in Louisiana.

Three colorized H5N1 virus particles (rod-shaped; orange) imaged by an electron microscope. With a couple of genetic shifts in H5N1, the US variant could evolve into a more virulent and widespread virus. Credit: CDC/NIAID/Flickr

A person in Louisiana is hospitalized with H5N1 bird flu after having contact with sick and dying birds suspected of carrying the virus, state health officials announced Friday. It is the first human H5N1 case detected in Louisiana. For now, the case is considered a "presumptive" positive until testing is confirmed by the Centers for Disease Control and Prevention. Health officials say that the risk to the public is low but caution people to stay away from any sick or dead birds.

Although the person has been hospitalized, their condition was not immediately reported. It's also unclear what kind of birds the person had contact with: wild, backyard, or commercial. Ars has reached out to Louisiana's health department and will update this piece with any additional information.

The case is just the latest amid H5N1's global and domestic rampage. The virus has been ravaging birds of all sorts in the US since early 2022 and spilling over to a surprisingly wide range of mammals. In March this year, officials detected an unprecedented leap to dairy cows, which has since caused a nationwide outbreak. The virus is currently sweeping through California, the country's largest dairy producer. To date, at least 845 herds across 16 states have contracted the virus since March, including 630 in California, which detected its first dairy infections in late August.

Human cases

At least 60 people in the US have been infected amid the viral spread this year. But the new case in Louisiana stands out. To date, nearly all of the human cases have been among poultry and dairy workers (unlike the new case in Louisiana), and almost all have been mild (also unlike the new case). Most of the cases have involved conjunctivitis (pink eye) and/or mild respiratory and flu-like symptoms.

There was a case in a patient in Missouri who was hospitalized. However, that person had underlying health conditions, and it's unclear if H5N1 was the cause of their hospitalization or merely an incidental finding. It remains unknown how the person contracted the virus; an extensive investigation found no animal or other exposure that could explain the infection. No human-to-human spread of H5N1 has been found in the US.

Last month, an otherwise healthy teen in Canada was found to have H5N1 and was hospitalized in critical condition from the infection. It was the first H5N1 human case reported in Canada. Like the case in Missouri, investigators were not able to find an explanation for how the teen contracted the virus. The investigation has since been closed, with no additional cases having been found.
Public health officials have stopped providing health updates on the case, citing the closed investigation and patient privacy.

Evolving threat

Infectious disease experts have recently warned that H5N1 may only need to acquire a small number of mutations to become a greater threat to humans. For example, last week, researchers published a study in Science finding that a single mutation in the H5N1 dairy strain would make the virus better at latching onto human cells. The more the virus circulates around us, the more opportunities it has to accumulate such mutations and adapt to infect our respiratory tracts and spread from person to person.

Influenza viruses are also able to swap genetic segments with each other in a process called reassortment. As flu season begins in the US, a nightmare scenario that experts have raised is if H5N1 swaps segments with the seasonal flu, creating a new, potentially deadly virus with pandemic potential. For this to happen, a person would have to be infected with the two types of influenza viruses at the same time, something health officials have feared could happen in dairy or poultry workers as the outbreaks continue.

While the human cases of H5N1 detected this year have mostly been mild, the virus has a history of more severity. Globally, H5N1 has had a case fatality rate of 49 percent, according to data collected between 2003 and November 2024 by the World Health Organization. Why the US cases have so far been almost entirely mild is an open question.

Beth Mole is Ars Technica's Senior Health Reporter. She has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes.
  • Yearlong supply-chain attack targeting security pros steals 390K credentials
    arstechnica.com
Dan Goodin, Dec 13, 2024. Multifaceted, high-precision campaign targets malicious and benevolent hackers alike.

A sophisticated and ongoing supply-chain attack operating for the past year has been stealing sensitive login credentials from both malicious and benevolent security personnel by infecting them with Trojanized versions of open source software from GitHub and NPM, researchers said.

The campaign, first reported three weeks ago by security firm Checkmarx and again on Friday by Datadog Security Labs, uses multiple avenues to infect the devices of researchers in security and other technical fields. One is through packages that have been available on open source repositories for over a year. They install a professionally developed backdoor that takes pains to conceal its presence. The unknown threat actors behind the campaign have also employed spear phishing that targets thousands of researchers who publish papers on the arXiv platform.

Unusual longevity

The objectives of the threat actors are also multifaceted. One is the collection of SSH private keys, Amazon Web Services access keys, command histories, and other sensitive information from infected devices every 12 hours. When this post went live, dozens of machines remained infected, and an online account on Dropbox contained some 390,000 credentials for WordPress websites taken by the attackers, most likely by stealing them from fellow malicious threat actors. The malware used in the campaign also installs cryptomining software that was present on at least 68 machines as of last month.

It's unclear who the threat actors are or what their motives may be. Datadog researchers have designated the group MUT-1244, with MUT short for "mysterious unattributed threat."

The campaign first came to light when Checkmarx recently discovered @0xengine/xmlrpc, a package that had circulated on the NPM JavaScript repository since October 2023. @0xengine/xmlrpc began as a benign package offering a JavaScript implementation of the widely used XML-RPC protocol and a client implementation for Node.js. (A screenshot from Checkmarx shows the NPM page where @0xengine/xmlrpc was available.) Over time, the package slowly and strategically evolved into the malware it is today. A significant change eventually introduced heavily obfuscated code hidden in one of its components. In its first 12 months, @0xengine/xmlrpc received 16 updates, giving developers the impression it was a benign and legitimate code library that could be trusted in sensitive environments.

MUT-1244 complemented @0xengine/xmlrpc with a second package, which was available on GitHub. Titled yawpp and hosted at hxxps[:]//github[.]com/hpc20235/yawpp, the package presented itself as a tool for WordPress credential checking and content posting.
There's no malicious code in yawpp itself, but because the package requires @0xengine/xmlrpc as a dependency (supposedly because it used @0xengine/xmlrpc for XML-RPC communication with WordPress sites), the malicious package was automatically installed.

"The combination of regular updates, seemingly legitimate functionality, and strategic dependency placement has contributed to the package's unusual longevity in the NPM ecosystem, far exceeding the typical lifespan of malicious packages that are often detected and removed within days," Checkmarx researcher Yehuda Gelb wrote last month.

The malicious functionality of the @0xengine/xmlrpc package was made all the more stealthy by remaining dormant until or unless executed through one of two vectors:

- Direct package users execute any command with the --targets or -t flag. This activation occurs when running the package's validator functionality, which masquerades as an XML-RPC parameter validation feature.
- Users installing the yawpp WordPress tool from GitHub automatically receive the malicious package as a dependency. The malware activates when running either of yawpp's main scripts (checker.js or poster.js), as both require the --targets parameter for normal operation.

(A diagram from Checkmarx shows the attack flow.)

The malware maintained persistence, meaning the ability to run each time the infected machine was rebooted, by disguising itself as a legitimate session authentication service named Xsession.auth. Every 12 hours, Xsession.auth would initiate a systematic collection of sensitive system data, including:

- SSH keys and configurations from ~/.ssh
- Command history from ~/.bash_history
- System information and configurations
- Environment variables and user data
- Network and IP information through ipinfo.io

The stolen data would then be uploaded to either an account on Dropbox or file.io. Monitoring the wallet where mined Monero cryptocurrency was deposited indicated the malware was running on machines in the real world. (A Checkmarx screenshot shows a graph tracking mining activity.)

But wait, there's more

On Friday, Datadog revealed that MUT-1244 employed additional means for installing its second-stage malware. One was through a collection of at least 49 malicious entries posted to GitHub that contained Trojanized proof-of-concept exploits for security vulnerabilities. Such packages help malicious and benevolent security personnel better understand the extent of vulnerabilities, including how they can be exploited or patched in real-life environments.

A second major vector for spreading @0xengine/xmlrpc was through phishing emails. Datadog discovered MUT-1244 had left a phishing template, accompanied by 2,758 email addresses scraped from arXiv, a site frequented by professional and academic researchers. The email, directed to people who develop or research software for high-performance computing, encouraged them to install a CPU microcode update that would supposedly significantly improve performance. Datadog later determined that the emails had been sent from October 5 through October 21.

Further adding to the impression of legitimacy, several of the malicious packages are automatically included in legitimate sources, such as Feedly Threat Intelligence and Vulnmon.
These sites included the malicious packages in proof-of-concept repositories for the vulnerabilities the packages claimed to exploit. "This increases their look of legitimacy and the likelihood that someone will run them," Datadog said.

The attackers' use of @0xengine/xmlrpc allowed them to steal some 390,000 credentials from infected machines. Datadog has determined the credentials were for use in logging into administrative accounts for websites that run the WordPress content management system.

Taken together, the many facets of the campaign (its longevity, its precision, the professional quality of the backdoor, and its multiple infection vectors) indicate that MUT-1244 was a skilled and determined threat actor. The group did, however, err by leaving the phishing email template and addresses in a publicly available account.

The ultimate motives of the attackers remain unclear. If the goal were to mine cryptocurrency, there would likely be better populations than security personnel to target. And if the objective was targeting researchers, as other recently discovered campaigns have done, it's unclear why MUT-1244 would also employ cryptocurrency mining, an activity that's often easy to detect.

Reports from both Checkmarx and Datadog include indicators people can use to check if they've been targeted.

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and Bluesky. Contact him on Signal at DanArs.82.
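As a rough illustration of what such an indicator check might look like, here is a minimal Python sketch that scans a project tree's package-lock.json files for the trojanized @0xengine/xmlrpc dependency and looks for files matching the Xsession.auth persistence name described above. The package and artifact names come from the reporting; the autostart directories searched are illustrative assumptions, and this is in no way a substitute for the authoritative indicator lists in the Checkmarx and Datadog reports.

```python
import json
import sys
from pathlib import Path

# Indicators drawn from the reporting above; illustrative, not exhaustive.
MALICIOUS_PACKAGE = "@0xengine/xmlrpc"
PERSISTENCE_NAME = "Xsession.auth"  # the malware's fake "session auth" service

def check_lockfile(lockfile: Path) -> bool:
    """Return True if the trojanized package appears anywhere in the resolved tree."""
    try:
        data = json.loads(lockfile.read_text())
    except (OSError, json.JSONDecodeError):
        return False
    # npm v2/v3 lockfiles list every resolved package under "packages".
    packages = data.get("packages", {})
    return any(MALICIOUS_PACKAGE in name for name in packages)

def check_persistence() -> list[Path]:
    """Look for the persistence artifact in common autostart locations (assumed)."""
    candidates = [Path.home() / ".config/autostart",
                  Path.home() / ".config/systemd/user",
                  Path("/etc/systemd/system")]
    hits = []
    for directory in candidates:
        if directory.is_dir():
            hits.extend(p for p in directory.iterdir() if PERSISTENCE_NAME in p.name)
    return hits

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for lock in root.rglob("package-lock.json"):
        if check_lockfile(lock):
            print(f"[!] {MALICIOUS_PACKAGE} found in {lock}")
    for hit in check_persistence():
        print(f"[!] possible persistence artifact: {hit}")
```

A scan like this only catches the two named artifacts; the point is that dependency-chain attacks surface in the lockfile, not in the code you installed deliberately.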
  • What to Prioritize in Health IT in 2025
    www.informationweek.com
Heading into 2025, healthcare organizations still face workforce shortages on both the clinical and IT sides. Growth in artificial intelligence and automation will enable tech leaders to address these shortages at health systems. However, as healthcare IT leaders continue experimenting with generative AI (GenAI), it may not be as much of a top priority as you might think, according to IDC's Worldwide C-Suite Tech Survey, which was conducted in September and October 2024.

"Interestingly, given all the focus on generative AI, only 25% of healthcare respondents reported implementing AI/GenAI as their organization's top priority for the next 12 months," Lynne A. Dunbrack, group vice president for the public sector at IDC, says in an email interview.

The IDC report lists the top three health IT priorities in healthcare as investing in security technologies (36.5%), improving customer-focused digital experiences (36.1%), and advancing digital skills across the organization (33%). Here, InformationWeek offers insights from several industry experts on the top priorities for health IT leaders in 2025.

Addressing Data Storage Needs

Modernizing their infrastructure in the cloud to manage increasing data volumes should be a priority for health IT leaders, according to the IDC FutureScape: Worldwide Healthcare Industry 2025 Predictions report. "Cloud solutions and platforms offer more than just expanded technology capacity, scalability, and access to managed services," the report stated. "They also act as a catalyst for data exchange and interoperability, enabling seamless integration of third-party applications and other platforms, creating a more open, dynamic, and innovative ecosystem."

Scaling Precision Medicine to a Broader Population

In 2025, health IT leaders should expand precision medicine to a wider population, says Brigham Hyde, cofounder and CEO of Atropos Health, which offers a cloud-based analytics platform for converting healthcare data into personalized evidence. Precision medicine uses AI and digital tools to make better-targeted treatments possible. The technology could support drug development and personalized therapies for patients. To scale precision medicine, the healthcare industry must keep data specific and personalized, according to Hyde.

"Precision medicine traditionally focuses on small, highly specific patient cohorts with unique genetic, environmental, or lifestyle factors," Hyde says via email. "Scaling it involves extending this level of personalized care to larger and more diverse populations by leveraging technologies like AI and real-world data."

AI delivers the ability to drill down on insights for specific conditions at a granular level, Hyde explains.
Once these models produce tailored recommendations, they can scale to address broader populations by combining multiple focused models or synthesizing data from different specialties, he says.

Implementing Generative AI

Healthcare organizations will move on from simply experimenting with GenAI to carrying out enterprise-wide AI strategies, according to the IDC FutureScape report. Although healthcare GenAI investments are expected to triple by 2026, 75% of these GenAI initiatives will fail to achieve their expected benefits by 2027 due to issues around trustworthiness of data, disconnected workflows, and end-user resistance, IDC reported.

In the meantime, in 2025, health IT leaders will need to prioritize quality assurance and physician trust with GenAI and large language models (LLMs), according to Hyde. "We will need to scrutinize applications for their clinical accuracy, transparency, and alignment with ethical standards," Hyde says.

In the coming year, health IT leaders will prioritize evaluating the accuracy they get from AI algorithms, according to Michael Gao, CEO and cofounder of SmarterDx, which builds clinical AI applications that allow hospitals to achieve revenue integrity, such as checking for billing coding errors and revenue leakage.

"As we see more widespread adoption of AI and especially generative AI in healthcare, health IT leaders are going to be prioritizing not just how to supervise an algorithm to understand what level of accuracy you're getting, but also determining how to even pick what level of accuracy you want in the first place," Gao says in an email interview. "For example, you want extremely high accuracy algorithms for clinical care. There are a lot of learnings around that before we can really use algorithms effectively in healthcare." (A minimal sketch of what picking such an operating point can look like appears at the end of this article.)

Adopting Ambient AI

Hyde advises that health tech leaders prioritize ambient AI, which operates in the background using advanced computing and AI to detect and generate insights without a user's involvement. The technology can automate tasks as well as personalize care delivery, he says. "By collecting and analyzing real-world data in the background, ambient AI enables more precise and actionable insights for disease management, treatment optimization, and personalized medicine initiatives," Hyde explains. Ambient AI can also reduce clinician burnout and improve physician retention through ambient dictation and transcription of notes from patient visits, according to Hyde.

Addressing Health Inequities With AI

To address health inequities and avoid biases in AI models, health IT leaders should prioritize vetting AI use cases, says Ann Bilyew, executive vice president for health and president of the Healthcare Solutions Group at WebMD/Internet Brands. Keeping AI equitable means paying attention to the social determinants of health: factors that influence health such as income, job, education level, and ZIP code.

"Although addressing health inequities is a worthwhile and promising goal for AI, it's important to note that AI is only as good as the material it's trained on, and that material has inherent biases," Bilyew tells us via email.
"AI can exacerbate those biases, so it is critical that healthcare organizations thoroughly vet these use cases to ensure they meet the intended goal."

AI models should apply to patients across the board to remain trustworthy and equitable, suggests Dan Stevens, healthcare and life sciences solutions architect at Lenovo. "To gain trust from care providers and patients to accept AI-generated healthcare recommendations, it will be crucial to ensure the data used for training is representative of the general population, maintains patient data confidentiality, and avoids bias," Stevens says via email.

Investing in Cybersecurity Tools

In IDC's Worldwide C-Suite Tech Survey, 46.9% of healthcare respondents cited security concerns as the top challenge their organizations faced when implementing GenAI. "Security and cybersecurity tools are a business imperative to protect vulnerable healthcare infrastructure against increasing volumes of insidious ransomware attacks that put patient safety at risk," Dunbrack says.

Meanwhile, by 2027, increasing cybersecurity risks will drive healthcare organizations to use AI-based threat intelligence solutions to enable continuity of care and protect patients, according to the IDC FutureScape report. "To safeguard patient safety and ensure uninterrupted healthcare services, it is imperative to make investments in cybersecurity a top priority," the report stated.

Mitesh Rao, founder and CEO of OMNY Health, notes the security steps healthcare organizations should take in 2025 following the massive data breaches that occurred in healthcare in 2024, particularly the one involving Change Healthcare. "More companies need to implement checks and balances on their own operations to prevent leaks and cyberattacks," Rao says in an email interview. "Beyond that, data providers need to vet their data-sharing policies to make sure that patients' information doesn't end up in the wrong hands."

As AI models are used more extensively and health data gets spread across diagnostic and financial information as well as multiple types of platforms, including local devices, mobile, servers, and cloud services, IT leaders will need to prevent the risk of security breaches, Lenovo's Stevens suggests. "If not managed appropriately, AI workflows risk introducing unanticipated security breaches due to a lack of end-to-end protection keeping data secure across all resources, from an individual's PC to the cloud," Stevens says.

Tackling Regulatory Compliance

With the focus on GenAI, healthcare organizations must ensure they understand regulations around compliance in 2025, IDC's FutureScape report noted. For 2025, Atropos Health's Hyde advises that health IT leaders build frameworks that establish trust while adhering to regulatory standards at the same time. These frameworks will depend on the size of the healthcare organization, he says.

"Larger health systems and technology companies with robust resources may prioritize building their own frameworks tailored to their specific needs, ensuring alignment with their internal workflows, patient populations, and operational goals," Hyde says. "However, the majority are expected to rely on or closely align with emerging regulatory frameworks and standards."

Prioritize Cyber Resilience

In 2025, health IT leaders should keep cyber resilience in mind to stay prepared for cybersecurity incidents before they occur, advises Ty Greenhalgh, industry principal of healthcare at cybersecurity firm Claroty.
Greenhalgh is also an ambassador for the US Department of Health and Human Services' 405(d) Task Force and a member of the HHS Healthcare Sector Council Cyber Working Group. Health IT leaders should rely on the NIST definition of resilience to anticipate, survive, and recover from cybersecurity threats, according to Greenhalgh.

"By leveraging the NIST definition of resilience, organizations can anticipate, withstand, adapt, and recover from threats," Greenhalgh tells us via email. "This approach emphasizes early detection and mitigation to reduce downtime and financial impact, particularly in the face of persistent threats like ransomware."
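To make Gao's point about choosing an accuracy target concrete, here is a minimal, hypothetical Python sketch of one common way to pick an operating point for a clinical classifier: sweep the decision threshold on validation data until a required sensitivity is met, then report the precision that choice implies. The data, the helper name, and the 95% sensitivity target are illustrative assumptions, not anything from the article or any vendor's product.

```python
import numpy as np

def pick_threshold(y_true: np.ndarray, scores: np.ndarray,
                   min_sensitivity: float = 0.95):
    """Return the highest threshold whose sensitivity (recall) meets the target,
    plus the sensitivity and precision achieved there. Hypothetical helper."""
    for t in np.unique(scores)[::-1]:          # sweep thresholds high to low
        pred = scores >= t
        tp = np.sum(pred & (y_true == 1))
        fn = np.sum(~pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        sensitivity = tp / (tp + fn)
        if sensitivity >= min_sensitivity:     # first (highest) qualifying threshold
            precision = tp / (tp + fp) if (tp + fp) else 0.0
            return t, sensitivity, precision
    return None

# Toy validation data: 1 = condition present, scores = model outputs in [0, 1].
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
s = np.clip(y * 0.6 + rng.normal(0.2, 0.25, 1000), 0, 1)
print(pick_threshold(y, s))  # (threshold, sensitivity >= 0.95, implied precision)
```

The design choice Gao gestures at is exactly this trade: demanding higher sensitivity for clinical care pushes the threshold down and the precision with it, so the "right" accuracy is a policy decision made before deployment, not a number read off afterward.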
  • Uniting IT, Finance, and Sustainability Through the Integrated Profit and Loss
    www.informationweek.com
Rick Pastore, Research Principal, SustainableIT.org | December 13, 2024 | 4 Min Read

Economies across the world are making slow recoveries from the COVID-19 setbacks but are at risk from geopolitical conflict and tensions, trade protectionism, and high debt levels. At the same time, populist politics, nationalism, and sovereigntist movements are gaining traction in countries and regions. These factors make it more challenging for companies to pursue environmental, social, and governance (ESG) sustainability programs and invest in the necessary transformation. Even internally, C-suites may not see eye to eye, with sustainability, compliance, and HR officers often at odds with finance and procurement, and IT typically on the sidelines or caught in the middle. If only there were a way to show everyone which sustainability investments made sound business sense.

Turns out, there is. First deployed in the early 2010s, the integrated profit and loss (IP&L) statement can bring transparency and clarity to the business impact of sustainability investment. The IP&L is a holistic approach to financial reporting that accounts not only for traditional financial metrics but also for sustainability factors and impacts. The standard P&L focuses on revenues, expenses, and profit; the IP&L adds the company's impact on broader aspects such as natural resources, carbon footprint, social contributions, and governance practices.

By quantifying the economic, environmental, and social impacts of business activities, companies can make more informed strategic decisions that integrate profitability with sustainability goals. For instance, an IP&L might reflect the costs associated with carbon emissions or the benefits of social programs, together with the financial reliance on a healthy environment that supports agricultural productivity. This allows business leaders to see how these factors influence the company's overall financial health. It also makes clearer the investments and initiatives that deliver both financial returns and sustainability gains.

Food multinational Danone released its first IP&L in 2010. Other companies with public IP&L reports include global health technology company Philips and paint and coatings manufacturer AkzoNobel. Brazilian cosmetics company Natura & Co. adopted the IP&L in 2021 to measure and manage its sustainability impacts. It revealed a net positive societal value primarily driven by social and human capital investment: for every $1 of sales, Natura generated $1.50 in net societal value.

Despite these benefits, the IP&L is not widely used, largely due to a deficiency of standardized data. It's this deficiency, in part, that offers the IT organization an opportunity to join finance and sustainability at the strategic table. An IP&L relies on sophisticated data integration and analytics, which places the IT office at the heart of its implementation. IT can develop or adapt systems to collect, process, and analyze data from various sources, such as energy consumption and emissions, supply chain, employee welfare, and governance compliance. This may involve integrating IoT sensors, harnessing big data and activity-based carbon accounting systems and databases, and applying AI algorithms to monitor sustainability metrics in real time.
IT would also contribute to ensuring data validity and auditability.

With a more complete and reliable sustainability data set, the finance office would be able to make data-driven decisions on ESG-related capital allocation, budget forecasting, and performance measurement. Finance and investor relations could also leverage the IP&L to communicate financial and non-financial value creation to investors and other stakeholders, contributing to transparency and trust and reducing the risk of greenwashing accusations.

For sustainability officers, the IP&L may be the most potent professional tool at their disposal. With it, they can quantify and track the impact of their initiatives on not only sustainability metrics but financial performance. They can identify and promote the programs that contribute most to the company's overall goals and justify sustainability transformation investments based on clear financial and non-financial impacts. It is also a great mechanism to communicate and validate the sustainability team's impact to other departments and the executive suite.

Indeed, the IP&L may be the best baton for the sustainability relay race, bringing CFOs and CIOs out on the track to join their CSO colleagues. Together, this trio can effectively assure stakeholders that sustainability investment is a fully vetted, carefully calculated component of business strategy.

But their unified impact goes well beyond investment justification. New research is underway, conducted by the Sustainability Value Creation partnership, that documents opportunities, best practices, and impacts of collaboration between finance, IT, and the sustainability office. The five organizations comprising the partnership bring expertise in finance, IT, and sustainability to the research initiative: Accounting for Sustainability, SustainableIT.org, the ERM Sustainability Institute, software company Salesforce, and global insights and advisory firm GlobeScan. The partnership's goal is to illuminate how companies can best create long-term value by integrating sustainability across their corporate functions. Leaders in IT, finance, and sustainability are invited to take part in a 10-minute online survey. It is open until December 23, with results expected in February 2025.

About the Author: As Research Principal at SustainableIT.org, Rick Pastore develops and delivers research-based insights, tools, case histories, and other content tailored for IT leaders driving ESG strategies for their functions, enterprises, and industries. He has over 25 years of experience working with CIOs and their teams to apply thought leadership and best practices to maximize business value from information technology.
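As a toy illustration of the IP&L mechanic described above, the Python sketch below computes a simple integrated result: conventional profit, minus a monetized environmental externality, plus a monetized social contribution. Every line item and monetization rate (including the $80-per-tonne carbon price) is invented for illustration; real IP&L methodologies, like those Danone or Natura use, are far more elaborate and rest on audited data.

```python
from dataclasses import dataclass

# All figures and monetization rates below are invented for illustration only.
CARBON_PRICE_PER_TONNE = 80.0   # assumed internal carbon price, USD

@dataclass
class IntegratedPL:
    revenue: float        # USD
    expenses: float       # USD
    co2_tonnes: float     # emissions to be monetized
    social_value: float   # monetized social/human capital impact, USD

    @property
    def financial_profit(self) -> float:
        return self.revenue - self.expenses          # the standard P&L result

    @property
    def environmental_cost(self) -> float:
        return self.co2_tonnes * CARBON_PRICE_PER_TONNE

    @property
    def integrated_result(self) -> float:
        # Standard P&L result, adjusted for monetized externalities.
        return self.financial_profit - self.environmental_cost + self.social_value

ipl = IntegratedPL(revenue=10_000_000, expenses=8_500_000,
                   co2_tonnes=12_000, social_value=700_000)
print(f"Financial profit:   ${ipl.financial_profit:,.0f}")
print(f"Environmental cost: ${ipl.environmental_cost:,.0f}")
print(f"Integrated result:  ${ipl.integrated_result:,.0f}")
# Per-$1-of-sales view, in the spirit of Natura's $1.50 figure:
print(f"Integrated result per $1 of sales: ${ipl.integrated_result / ipl.revenue:,.2f}")
```

Even in this toy form, the structure shows why IT sits at the heart of the exercise: each input line is only as good as the pipeline (emissions metering, HR data, financial systems) that feeds it.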
  • Are vast amounts of hydrogen fuel hidden below Earth's surface?
    www.newscientist.com
Drill rig in Nebraska run by Natural Hydrogen Energy LLC, which established its first hydrogen borehole in 2019. Credit: Viacheslav Zgonnik

For the past few years, companies and prospectors around the world have been hunting for underground reserves of natural hydrogen, spurred by estimates that Earth contains trillions of tonnes of the gas. If found, this geologic hydrogen could accelerate the transition away from fossil fuels. But despite a few tantalising hints that vast reserves exist, the search has largely come up short. Until recently, most geologists…
  • Mpox became a global health emergency for the second time in 2024
    www.newscientist.com
A Red Cross worker spraying chlorine-based disinfectant in Goma in the Democratic Republic of the Congo in August 2024. Credit: MOISE KASEREKA/EPA-EFE/Shutterstock

Mpox surged in parts of East, West and Central Africa in 2024, prompting the World Health Organization (WHO) to declare it a public health emergency of international concern in August. This was just over a year after it said an earlier mpox emergency was over, marking the first time the WHO has declared two such alerts consecutively over the same infection. The emergency that ended in 2023 was driven by the clade IIb variant of mpox, formerly known as…
• AI's emissions are about to skyrocket even further
    www.technologyreview.com
It's no secret that the current AI boom is using up immense amounts of energy. Now we have a better idea of how much.

A new paper, from a team at the Harvard T.H. Chan School of Public Health, examined 2,132 data centers operating in the United States (78% of all facilities in the country). These facilities, essentially buildings filled to the brim with rows of servers, are where AI models get trained, and they also get pinged every time we send a request through models like ChatGPT. They require huge amounts of energy both to power the servers and to keep them cool.

Since 2018, carbon emissions from data centers in the US have tripled. For the 12 months ending August 2024, data centers were responsible for 105 million metric tons of CO2, accounting for 2.18% of national emissions (for comparison, domestic commercial airlines are responsible for about 131 million metric tons). About 4.59% of all the energy used in the US goes toward data centers, a figure that's doubled since 2018.

It's difficult to put a number on how much AI in particular, which has been booming since ChatGPT launched in November 2022, is responsible for this surge. That's because data centers process lots of different types of data: in addition to training or pinging AI models, they do everything from hosting websites to storing your photos in the cloud. However, the researchers say, AI's share is certainly growing rapidly as nearly every segment of the economy attempts to adopt the technology.

"It's a pretty big surge," says Eric Gimon, a senior fellow at the think tank Energy Innovation, who was not involved in the research. "There's a lot of breathless analysis about how quickly this exponential growth could go. But it's still early days for the business in terms of figuring out efficiencies, or different kinds of chips."

Notably, the sources for all this power are particularly dirty. Since so many data centers are located in coal-producing regions, like Virginia, the carbon intensity of the energy they use is 48% higher than the national average. The paper, which was published on arXiv and has not yet been peer-reviewed, found that 95% of data centers in the US are built in places with sources of electricity that are dirtier than the national average.

There are causes other than simply being located in coal country, says Falco Bargagli-Stoffi, an author of the paper. Dirtier energy is available throughout the entire day, he says, and plenty of data centers require that to maintain peak operation 24/7. Renewable energy, like wind or solar, might not be as available. Political or tax incentives, and local pushback, can also affect where data centers get built.

One key shift in AI right now means that the field's emissions are soon likely to skyrocket. AI models are rapidly moving from fairly simple text generators like ChatGPT toward highly complex image, video, and music generators. Until now, many of these multimodal models have been stuck in the research phase, but that's changing.

OpenAI released its video generation model Sora to the public on December 9, and its website has been so flooded with traffic from people eager to test it out that it is still not functioning properly. Competing models, like Veo from Google and Movie Gen from Meta, have still not been released publicly, but if those companies follow OpenAI's lead as they have in the past, they might be soon. Music generation models from Suno and Udio are growing (despite lawsuits), and Nvidia released its own audio generator last month.
Google is working on its Astra project, which will be a video-AI companion that can converse with you about your surroundings in real time.

"As we scale up to images and video, the data sizes increase exponentially," says Gianluca Guidi, a PhD student in artificial intelligence at the University of Pisa and IMT Lucca, who is the paper's lead author. Combine that with wider adoption, he says, and emissions will soon jump.

One of the goals of the researchers was to build a more reliable way to get snapshots of just how much energy data centers are using. That's been a more complicated task than you might expect, given that the data is dispersed across a number of sources and agencies. They've now built a portal that shows data center emissions across the country.

The long-term goal of the data pipeline is to inform future regulatory efforts to curb emissions from data centers, which are predicted to grow enormously in the coming years. "There's going to be increased pressure, between the environmental and sustainability-conscious community and Big Tech," says Francesca Dominici, director of the Harvard Data Science Initiative and another coauthor. "But my prediction is that there is not going to be regulation. Not in the next four years."
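To make the article's headline figures easy to sanity-check, here is a small Python sketch that works through the arithmetic relating them: the 105 million metric tons of CO2, the 2.18% national share it represents, and what a 48%-above-average carbon intensity implies for a given electricity draw. The national total is back-derived from the reported share, and the grid-average intensity and facility size are assumptions for illustration, not figures from the paper.

```python
# Figures reported in the article.
dc_emissions_mt = 105e6          # metric tons CO2, 12 months ending Aug 2024
dc_share_of_national = 0.0218    # data centers' 2.18% share of US emissions
intensity_premium = 0.48         # data center power is 48% more carbon-intensive

# Implied US national emissions (back-derived, for illustration only).
national_mt = dc_emissions_mt / dc_share_of_national
print(f"Implied US total: {national_mt / 1e9:.2f} billion tonnes CO2")

# What the 48% intensity premium means for a hypothetical facility.
avg_intensity_kg_per_kwh = 0.37              # assumed US grid average, kg CO2/kWh
dc_intensity = avg_intensity_kg_per_kwh * (1 + intensity_premium)
facility_gwh_per_year = 500                  # hypothetical large data center
annual_tonnes = facility_gwh_per_year * 1e6 * dc_intensity / 1000
print(f"Hypothetical 500 GWh/yr facility: {annual_tonnes:,.0f} tonnes CO2/yr")
```

Run as written, the back-derived national total lands near 4.8 billion tonnes, consistent with published US energy-related CO2 figures, which is a useful check that the reported share and tonnage hang together.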