• In IT? Need cash? Cybersecurity whistleblowers are earning big payouts.
    arstechnica.com
    blow that whistle: In IT? Need cash? Cybersecurity whistleblowers are earning big payouts. The US government now relies on whistleblowers to bring many cases. Nate Anderson | Dec 16, 2024, 5:38 pm. Credit: Getty Images | spxChrome

    Matthew Decker is the former chief information officer for Penn State University's Applied Research Laboratory. As of October, he's also $250,000 richer.

    In his Penn State position, Decker was well placed to see that the university was not implementing all of the cybersecurity controls required by its various contracts with NASA and the Department of Defense (DoD). It did not, for instance, use an external cloud services provider that met the DoD's security guidelines, and it fudged some of the self-submitted "scores" it gave the government about Penn State's IT security.

    So Decker sued the school under the False Claims Act, which lets private individuals bring cases against organizations on behalf of the government if they come across evidence of wrongdoing related to government contracts. In many of these cases, the government later "intervenes" to assist with the case (as it did here), but whether it does so or not, whistleblowers stand to collect a percentage of any fines if they win.

    In October, Penn State agreed to a $1.25 million settlement with the government; Decker got $250,000 of the money.

    On the regular

    This now happens in IT with some regularity. In November, Dell, Dell Federal Systems, and Iron Bow Technologies settled with the government for $4.3 million over claims that they "violated the False Claims Act by submitting and causing the submission of non-competitive bids to the Army and thereby overcharging the Army under the Army Desktop and Mobile Computing 3 (ADMC-3) contract."

    But once again, this wasn't something the government uncovered on its own; a whistleblower named Brent Lillard, who was an executive at another company in the industry, brought the initial complaint. For his work, Lillard just made $345,000.

    In early December, Gen Digital (formerly Symantec) paid a much larger fee, $55.1 million, after losing a trial in 2022. Gen Digital/Symantec was found liable for charging the government higher prices than it charged to companies.

    Once again, the issue was brought to light by a whistleblower, Lori Morsell, who oversaw the contract for Gen Digital/Symantec. Morsell's award has not yet been determined by the court, but given the amount of the payout, it should be substantial.

    False Claims Act goes digital

    Due to the complexity of investigating, or even finding out about, technical failures and False Claims Act cases from outside an organization, the government has increasingly relied on whistleblowers to kick-start these sorts of IT cases.

    The False Claims Act goes back to the Civil War, when it was used on unscrupulous vendors who sold poor-quality goods to the Union army. Today, it has become the tool of choice for prosecuting cyber-failures by government contractors, largely because of the Act's robust whistleblower rules (technically known as its "qui tam" provisions).

    This was, even just a few years ago, a novel proposition. In 2020, the law firm Carlton Fields noted that "two significant whistleblower cases sent ripples through the False Claims Act (FCA) community by demonstrating the specter of FCA liability resulting from the failure to comply with cybersecurity requirements in government contracts."

    In one of these cases, Brian Markus earned $2.61 million for his False Claims Act case against Aerojet Rocketdyne. In the other, James Glenn sued Cisco over a video surveillance product that had known security flaws and yet was sold to numerous government agencies. Cisco eventually paid $8.6 million, of which Glenn walked away with more than $1 million.

    By 2021, however, False Claims Act cases against government contractors, especially in the IT sector, had become downright normal. The Department of Justice even stood up a special program, the Civil Cyber-Fraud Initiative, to assist with such cases. In a late 2021 speech, Acting Assistant Attorney General Brian Boynton said that the initiative would use whistleblowers and the False Claims Act to focus on three things:

    - Knowing failures to comply with contractual cyber standards
    - Knowing misrepresentation of security controls and practices
    - Knowing failure to report suspected breaches in a timely fashion

    In the last four years, the initiative has brought in judgments and settlements against major companies like Boeing (which paid $8.1 million in 2023; several whistleblowers split $1.5 million), and it has gone after huge universities like Penn State (see above) and Georgia Tech (earlier this year, still tied up in court).

    Blowing a whistle for years

    These cases all rely on insiders, and the payouts can be hefty, but the cases can also take years to reach their conclusions. The Cisco case, for instance, lasted eight years before the whistleblower got his money. The Penn State case was relatively speedy by contrast: a mere two years from its filing in October 2022 to the university's payout earlier this year.

    To report fraud against the federal government, contact the Department of Justice here. But be aware that, if you're hoping to collect a share of any future payout, you generally need to retain a lawyer and file a whistleblower case first.

    Nate Anderson is the deputy editor at Ars Technica. His most recent book is In Emergency, Break Glass: What Nietzsche Can Teach Us About Joyful Living in a Tech-Saturated World, which is much funnier than it sounds.
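    For a rough sense of how these awards compare with the settlements they came from, here is a minimal back-of-the-envelope sketch using only the figures reported above. This is not part of the original reporting; the case labels and variable names are illustrative, and the Cisco figure is a lower bound since the article says Glenn received "more than $1 million."

```python
# Whistleblower share of each settlement mentioned in the article (payout / settlement).
cases = {
    "Penn State (Decker)":       (250_000,   1_250_000),
    "Dell/Iron Bow (Lillard)":   (345_000,   4_300_000),
    "Cisco (Glenn, lower bound)": (1_000_000, 8_600_000),
    "Boeing (several combined)": (1_500_000, 8_100_000),
}

for name, (payout, settlement) in cases.items():
    print(f"{name}: {payout / settlement:.0%} of the settlement")
# Prints roughly 20%, 8%, 12%, and 19%, respectively.
```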
  • Huge math error corrected in black plastic study; authors say it doesn't matter
    arstechnica.com
    Missed a zero: Huge math error corrected in black plastic study; authors say it doesn't matter. Correction issued for black plastic study that had people tossing spatulas. Beth Mole | Dec 16, 2024, 5:23 pm. Credit: Getty | Grace Cary (close-up view of cooking utensils in a container on a kitchen counter)

    Editors of the environmental chemistry journal Chemosphere have posted an eye-catching correction to a study reporting that toxic flame retardants from electronics wind up in some household products made of black plastic, including kitchen utensils. The study sparked a flurry of media reports a few weeks ago that urgently implored people to ditch their kitchen spatulas and spoons. Wirecutter even offered a buying guide for what to replace them with.

    The correction, posted Sunday, will likely take some heat off the beleaguered utensils. The authors made a math error that put the estimated risk from kitchen utensils off by an order of magnitude.

    Specifically, the authors estimated that if a kitchen utensil contained middling levels of a key toxic flame retardant (BDE-209), the utensil would transfer 34,700 nanograms of the contaminant a day based on regular use while cooking and serving hot food. The authors then compared that estimate to a reference level of BDE-209 considered safe by the Environmental Protection Agency. The EPA's safe level is 7,000 ng per kilogram of body weight per day, and the authors used 60 kg (about 132 pounds) as the adult weight for their estimate. So the safe EPA limit would be 7,000 multiplied by 60, yielding 420,000 ng per day. That's 12 times more than the estimated exposure of 34,700 ng per day.

    However, the authors missed a zero and reported the EPA's safe limit as 42,000 ng per day for a 60 kg adult. The error made it seem like the estimated exposure was nearly at the safe limit, even though it was actually less than a tenth of the limit.

    "[W]e miscalculated the reference dose for a 60 kg adult, initially estimating it at 42,000 ng/day instead of the correct value of 420,000 ng/day," the correction reads. "As a result, we revised our statement from 'the calculated daily intake would approach the U.S. BDE-209 reference dose' to 'the calculated daily intake remains an order of magnitude lower than the U.S. BDE-209 reference dose.' We regret this error and have updated it in our manuscript."

    Unchanged conclusion

    While being off by an order of magnitude seems like a significant error, the authors don't seem to think it changes anything. "This calculation error does not affect the overall conclusion of the paper," the correction reads. The corrected study still ends by saying that the flame retardants "significantly contaminate" the plastic products, which have "high exposure potential."

    Ars has reached out to the lead author, Megan Liu, but has not received a response. Liu works for the environmental health advocacy group Toxic-Free Future, which led the study.

    The study highlighted that flame retardants used in plastic electronics may, in some instances, be recycled into household items. "Companies continue to use toxic flame retardants in plastic electronics, and that's resulting in unexpected and unnecessary toxic exposures," Liu said in a press release from October. "These cancer-causing chemicals shouldn't be used to begin with, but with recycling, they are entering our environment and our homes in more ways than one. The high levels we found are concerning."

    BDE-209, aka decabromodiphenyl ether or deca-BDE, was a dominant component of TV and computer housings before it was banned by the European Union in 2006 and some US states in 2007. China only began restricting BDE-209 in 2023. The flame retardant is linked to carcinogenicity, endocrine disruption, neurotoxicity, and reproductive harm.

    Uncommon contaminant

    The presence of such toxic compounds in household items is important for noting the potential hazards in the plastic waste stream. However, in addition to finding levels that were an order of magnitude below safe limits, the study also suggested that the contamination is not very common.

    The study examined 203 black plastic household products, including 109 kitchen utensils, 36 toys, 30 hair accessories, and 28 food serviceware products. Of those 203 products, only 20 (10 percent) had any bromine-containing compounds at levels that might indicate contamination from bromine-based flame retardants like BDE-209. Of the 109 kitchen utensils tested, only nine (8 percent) contained concerning bromine levels. "[A] minority of black plastic products are contaminated at levels >50 ppm [bromine]," the study states.

    But that's just bromine compounds. Overall, only 14 of the 203 products contained BDE-209 specifically.

    The product that contained the highest level of bromine compounds was a disposable sushi tray at 18,600 ppm. Given that heating is a significant contributor to chemical leaching, it's unclear what exposure risk the sushi tray poses. Of the 28 food serviceware products assessed in the study, the sushi tray was only one of two found to contain bromine compounds. The other was a fast food tray that was at the threshold of contamination with 51 ppm.

    Beth Mole is Ars Technica's Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes.
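    For readers who want to check the corrected arithmetic themselves, here is a minimal sketch reproducing the comparison described above. The figures come from the article; the variable names are purely illustrative, and this is not code from the study itself.

```python
# Figures reported in the article and the correction.
estimated_intake_ng_per_day = 34_700   # estimated daily transfer from a utensil with middling BDE-209 levels
epa_reference_ng_per_kg_day = 7_000    # EPA reference dose for BDE-209, per kg of body weight per day
adult_weight_kg = 60                   # body weight assumed by the authors (~132 lb)

# Correct safe limit for a 60 kg adult: 7,000 * 60 = 420,000 ng/day (not the 42,000 originally reported).
safe_limit_ng_per_day = epa_reference_ng_per_kg_day * adult_weight_kg
print(safe_limit_ng_per_day)

# The estimated exposure is roughly a twelfth of the corrected limit.
print(round(safe_limit_ng_per_day / estimated_intake_ng_per_day, 1))  # ~12.1
```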
  • The Cloud You Want Versus the Cloud You Need
    www.informationweek.com
    How do operational needs compare with organizations' ambitions when it comes to using the cloud? Do plans for the cloud get ahead of what companies need?
  • 9 Cloud Service Adoption Trends
    www.informationweek.com
    Lisa Morgan, Freelance Writer | December 16, 2024 | 11 Min Read. Credit: Dubo via Alamy Stock

    As the competitive landscape changes and the mix of cloud services available continues to grow, organizations are moving deeper into the cloud to stay competitive. Many are adopting a cloud-first strategy.

    "Organizations are adopting more advanced, integrated cloud strategies that include multi-cloud environments and expanded services such as platform as a service (PaaS) and infrastructure as a service (IaaS)," says Bryant Robinson, principal consultant at management consulting firm Sendero Consulting. "This shift is driven by increasing demands for flexibility, scalability, and the need to support emerging technologies such as remote collaboration, real-time data processing, and AI-powered diagnostics."

    Recent surges in cyberattacks have also accelerated these changes, highlighting the need for adaptable digital infrastructure to ensure continuity of business processes, enhance user accessibility, and protect sensitive customer data.

    "Companies that are succeeding with cloud adoption are investing in improved security frameworks, focusing on interoperability, and leveraging cloud-native tools to build scalable applications," says Robinson. "In addition, certain industries have to prioritize technology with regulation and compliance mechanisms that add a level of complexity. Within healthcare, for example, regulations like HIPAA are [considered] and prioritized through implementing secure data-sharing practices across cloud environments."

    However, some organizations struggle with managing multi-cloud complexity and the resulting inability to access, share, and seamlessly use data across those environments. Organizations may also lack the in-house expertise needed to implement and operationalize cloud platforms effectively, leading to the inefficient use of resources and potential security risks.

    "Organizations should develop a clear, long-term cloud strategy that aligns with organizational goals, focusing on interoperability, scalability, and security. Prioritize upskilling IT teams to manage cloud environments effectively and invest in disaster recovery and cybersecurity solutions to protect sensitive customer data," says Robinson. "Embrace multi-cloud approaches for flexibility, simplifying management with automation and centralized control systems. Finally, select cloud vendors with a strong track record and expertise in supporting compliance within heavily regulated environments."

    Following are more trends driving cloud service shifts.

    1. Innovation

    Previously, the demand for cloud data services was largely driven by flexibility, convenience, and cost, but Emma McGrattan, CTO at Actian, a division of HCL Software, has seen a dramatic shift in how cloud data services are leveraged to accelerate innovation.

    "AI and ML use cases, specifically a desire to deliver on GenAI initiatives, are causing organizations to rethink their traditional approach to data and use cloud data services to provide a shortcut to seamless data integration, efficient orchestration, accelerated data quality, and effective governance," says McGrattan. "[The] successful companies understand the importance of investing in data preparation, governance, and management to prepare for GenAI-ready data. They also understand that high-quality data is essential, not only for success but also to mitigate the reputational and financial risks associated with inaccurate AI-driven decisions, including the very real danger of automating actions based on AI hallucinations."

    The advantages of embracing these data trends include accelerated insights, enhanced customer experiences, and significant gains in operational efficiency. However, substantial challenges persist. Data integration across diverse systems remains a complex undertaking, and the scarcity of skilled data professionals presents a significant hurdle. Furthermore, keeping pace with the relentless acceleration of technological advancements demands continuous adaptation and learning. Successfully navigating these challenges requires sound data governance.

    "My advice is to focus on encouraging data literacy across the organization and to foster a culture of data curiosity," says McGrattan. "I believe the most successful companies will be staffed with teams fluent in the language of data and empowered to ask questions of the data, explore trends, and uncover insights without encountering complexity or fearing repercussions for challenging the status quo. It is this curiosity that will lead to breakthrough insights and innovation because it pushes people to go beyond surface-level metrics."

    2. Cloud computing applications

    Most organizations are building modern cloud computing applications to enable greater scalability while reducing cost and consumption. They're also more focused on the security and compliance of cloud systems and how providers are validating and ensuring data protection.

    "Their main focus is really around cost, but a second focus would be whether providers can meet or exceed their current compliance requirements," says Will Milewski, SVP of cloud infrastructure and operations at content management solution provider Hyland. "Customers across industries are very cost-conscious. They want technology that's good, safe, and secure at a much cheaper rate."

    Providers are shifting to more container-based or serverless workloads to control cost because they allow providers to scale up to meet the needs of customer activity while also scaling back when systems are not heavily utilized.

    "You want to unload as many apps as possible to vendors whose main role is to service those apps. That hasn't changed. What has changed is how much they're willing to spend on moving forward on their digital transformation objectives," says Milewski.

    3. Artificial intelligence and machine learning

    There's a fundamental shift in cloud adoption patterns, driven largely by the emergence of AI and ML capabilities. Unlike previous cycles focused primarily on infrastructure migration, organizations are now having to balance traditional cloud ROI metrics with strategic technology bets, particularly around AI services. According to Kyle Campos, chief technology and product officer at cloud management platform provider CloudBolt Software, this evolution is being catalyzed by two major forces: First, cloud providers are aggressively pushing AI capabilities as key differentiators rather than competing on cost or basic services. Second, organizations are realizing that cloud strategy decisions today have more profound implications for future innovation capabilities than ever before.

    "The most successful organizations are maintaining disciplined focus on cloud ROI while exploring AI capabilities. They're treating AI services as part of their broader cloud fabric rather than isolated initiatives, ensuring that investments align with actual business value rather than just chasing the next shiny object," says Campos. "[However,] many organizations are falling into the trap of making strategic cloud provider commitments based on current AI capabilities without fully understanding the long-term implications. We're seeing some get burned by premature all-in strategies, reminiscent of early cloud adoption mistakes. There's also a tendency to underestimate the importance of maintaining optionality in this rapidly evolving landscape."

    4. Global collaboration and remote work

    More organizations are embracing global collaboration and remote work, and they are facing an unprecedented quantity of data to manage.

    "Companies are recognizing that with the exponential growth of data, the status quo for their IT stack can't accommodate their evolving performance, scalability, and budget requirements. Both large enterprises and agile, innovative SMBs are seeking new ways to manage their data, and they understand that cloud services enable the future and accelerate business," says Colby Winegar, CEO at cloud storage company Storj. "The companies on the leading edge are trying to incorporate non-traditional architectures and tools to deliver new services at lower cost without compromising on performance, security or, ultimately, their customers' experience."

    Some companies are struggling to adapt traditional IT infrastructure to future IT requirements when many of those solutions just can't accommodate burgeoning data growth and sustainability, legal, and regulatory requirements. Other companies are facing data lock-in.

    5. Business requirements

    Most of today's enterprises have adopted hybrid cloud and multi-cloud strategies to avoid vendor lock-in and to optimize their utilization of cloud resources.

    "The need for flexibility, cost control, and improved security are some factors driving this movement. Businesses are realizing various workloads could function better on various platforms, which helps to maximize efficiency and save expenses," says Roy Benesh, chief technology officer and co-founder of eSIMple, an eSIM offering.

    However, managing cloud costs is a challenge for many companies, and some lack the security they need to minimize the potential for data breaches and non-compliance. There are also lingering issues with integrating new cloud services with current IT infrastructure.

    "It is vital to start with a well-defined strategy that involves assessing present requirements and potential expansion. Cost and security management will be aided by the implementation of strong governance and monitoring mechanisms," says Benesh. "Meanwhile, staff members can fully exploit cloud technology if training is invested in, resulting in optimization."

    6. Operational improvement

    Cloud was initially adopted for cost efficiency, though many enterprises learned the hard way that cloud costs need to be constantly monitored and managed. Today's companies are increasingly using cloud for greater agility and innovation, to be closer to customers, ensure business continuity, and reduce overall risk.

    "Companies are getting it right when they invest in [a] cloud-native approach, including design, deployment, and operational processes, while automating infrastructure management, enhancing cloud security, and using data to drive decisions," says Sanjay Macwan, CIO/CISO at cloud communications company Vonage. "These steps make operations more efficient and secure. However, challenges arise when decision-makers underestimate the complexity of managing multiple cloud environments. Why does this matter? Because it often leads to inefficient use of resources, security gaps, and spiraling costs that hurt long-term strategic goals."

    To stay ahead, businesses must remain adaptable and resilient.

    "My advice is to take a cloud-smart approach. This means balancing innovation with a strong governance framework. Invest in solutions for cloud cost optimization and implement comprehensive security measures from the start," says Macwan. "This is crucial to staying ahead of security and cost management issues to ensure that your cloud strategy remains sustainable and effective while capturing the full innovation agility that the cloud can offer. Train your teams to handle these complex environments, and always prioritize a design that is both secure and resilient."

    7. Performance, security and cost

    Many organizations have questioned whether their wholesale migrations to cloud were worth it. Common concerns include security, performance, and cost, which has driven the move to hybrid cloud. Instead of going back to the old way of doing things, they want to take the lessons learned in public cloud and apply them on premises.

    "Performance, security, and cost concerns are driving change. As cloud has become more popular, it's also become more expensive. [Workload security] is now a bigger concern than ever, especially with modern speculative execution attacks at the CPU level. Lastly, some applications need to be physically close for latency and/or bandwidth reasons," says Kenny Van Alstyne, CTO at private cloud infrastructure company SoftIron. "[M]igrating back to the legacy way of doing on-premises infrastructure will lead to the desire to move back to cloud again. To succeed and be accepted, on-premises must be managed as if it were your own cloud."

    One reason private cloud is gaining popularity is that organizations can gain the efficiencies of cloud while maintaining control over cost, performance, and security on-prem, assuming they have the prerequisite knowledge and experience to succeed or the help necessary to avoid common pitfalls.

    8. Specific workload requirements

    Organizations deploying AI at scale are discovering that while traditional cloud infrastructure works well for general-purpose compute workloads, it presents challenges for AI operations, such as the unpredictable availability of GPUs, prohibitive at-scale costs, the operational complexity of energy-dense workloads, and performance bottlenecks in storage and networking. Complicating matters further, edge inferencing, initially touted as a darling AI deployment model, has been deprioritized by global telecommunications carriers due to 5G's underwhelming commercial returns.

    "Large language models demand high-performance storage systems capable of sustaining consistent, high-throughput data flows to keep pace with GPU processing speeds. While traditional cloud storage [and] enterprise SAN deployments work well for many use cases, AI training often requires vast sequential bandwidth to manage reduction operations effectively. Storage limitations can bottleneck training times and lead to costly delays," says Brennen Smith, head of infrastructure at cloud computing platform provider RunPod. "While building these specialized systems in-house reduces overall [operating expenses], this requires deep internal architectural knowledge and is capital-intensive, further complicated by Nvidia's release cadence, which is rendering GPUs outdated before their full depreciation cycle."

    These dynamics are leading to a different type of hybrid strategy, one that uses resources for what they do best. This includes combining public cloud, AI/ML-specific cloud offerings, and on-premises infrastructure.

    9. Healthcare agility

    Healthcare organizations made the same mistake many enterprises did: they started by lifting and shifting infrastructure to the cloud, essentially recreating their on-premises environment in a cloud setting. While this provided some benefits, particularly around disaster recovery, it failed to unlock the cloud's full potential.

    "Today, we're witnessing a more mature approach. Organizations are increasingly understanding that true cloud value comes from embracing cloud-native architectures and principles. This means building new applications as cloud-first and modernizing existing systems to leverage native cloud capabilities rather than just hosting them there," says Nandy Vaisman, CISO and VP of operations at health data integration platform Vim.

    Given the value of EHRs, healthcare organizations cannot afford to take a lift-and-shift approach to cybersecurity. When they do, it creates potential vulnerabilities.

    Vaisman recommends the following:

    - Moving beyond simple lift-and-shift to truly embrace cloud-native architectures
    - Investing in cloud security expertise and training
    - Adapting security practices specifically for cloud environments
    - Focusing on privacy-by-design in cloud implementations
    - Leveraging cloud-native tools for compliance and security monitoring

    Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites, including tech pubs, The Washington Post, and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
  • Plastic chemicals linked to hundreds of thousands of deaths worldwide
    www.newscientist.com
    Plastic food packaging can expose people to chemicals such as bisphenol A (BPA). Credit: Shutterstock/Trong Nguyen

    Hundreds of thousands of deaths and millions of cases of heart disease worldwide may be linked to chemicals in common plastic products, suggesting that more stringent regulations on such toxins could benefit public health.

    Maureen Cropper at the University of Maryland and her colleagues assessed the public health impact of exposure to three types of chemicals primarily used in plastics: bisphenol A (BPA), di(2-ethylhexyl) phthalate (DEHP), and polybrominated diphenyl ethers (PBDEs). BPA and DEHP are found in plastic food packaging, and PBDEs are flame retardants used in some household goods, such as furniture and electronics.

    Drawing on more than 1,700 previously published studies, the team estimated people's exposure to these three classes of chemicals across 38 countries, which represent roughly a third of the world's population. Three of the countries (the US, Canada, and South Korea) also have public databases that monitor levels of these chemicals in urine and blood samples, providing even more accurate data.

    In combination with medical records and toxicology reports, the researchers calculated health outcomes attributable to these chemicals. They found that in 2015, about 5.4 million cases of coronary artery disease and 346,000 strokes were associated with BPA exposure, and that roughly 164,000 deaths in people between 55 and 64 years old may have been due to DEHP.

    Thanks to regulations enacted in the late 2000s, the prevalence of these chemicals has since decreased in many countries, such as the US, Canada, and those in Europe. The researchers estimate that about 515,000 deaths could have been avoided if BPA and DEHP exposures in the US had been at post-regulation levels since 2003. This underscores the importance of governments and manufacturers limiting the use of toxic chemicals in plastic products before they reach consumers, says Cropper.

    However, it is important to remember these findings are only approximations. "I think one of the real limitations, frankly, is the lack of exposure data on these substances," says Cropper, meaning estimates for some countries may be less accurate than others. "It would be a good idea if more countries actually monitored [exposures to] these and other substances, which would improve our understanding of their public health burden," she says.

    Journal reference: PNAS, DOI: 10.1073/pnas.2412714121
  • Liquid metal particles can self-assemble into electronics
    www.newscientist.com
    A crosshatch pattern of wires created by self-assembling liquid metal particles. Credit: Julia Chang / North Carolina State University

    Self-assembling electronics made from liquid metal particles could provide a cheaper way of manufacturing computer chips, simply by harnessing the basic physics of how fluids flow through tiny structures.

    "The cost of entry in manufacturing electronics and building new chip fabrication plants in the US right now, we're talking billions of dollars," says Martin Thuo at North Carolina State University. "It's not cheap."

    Thuo and his colleagues first created a mixture of
  • The Download: AI's emissions and Google's big week
    www.technologyreview.com
    AI's emissions are about to skyrocket even further

    It's no secret that the current AI boom is using up immense amounts of energy. Now we have a better idea of how much.

    A new paper, from a team at the Harvard T.H. Chan School of Public Health, examined 78% of all data centers in the US. These facilities, essentially buildings filled to the brim with rows of servers, are where AI models get trained, and they also get pinged every time we send a request through models like ChatGPT. They require huge amounts of energy both to power the servers and to keep them cool. Since 2018, carbon emissions from data centers in the US have tripled.

    It's difficult to put a number on how much AI in particular is responsible for this surge. But AI's share is certainly growing rapidly as nearly every segment of the economy attempts to adopt the technology. Read the full story.

    Google's big week was a flex for the power of big tech

    Google has been speeding toward the holiday by shipping or announcing a flurry of products and updates. The combination of stuff here is pretty monumental, not just for a single company, but I think because it speaks to the power of the technology industry, even if it does trigger a personal desire that we could do more to harness that power and put it to more noble uses. Read more here.

    This story originally appeared in The Debrief with Mat Honan, our weekly take on what's really going on behind the biggest tech headlines. The story is subscriber-only, so nab a subscription too, if you haven't already! Or you can sign up to the newsletter for free to get the next edition in your inbox on Friday.

    The must-reads

    I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

    1 Mysterious drones have been spotted along the US east coast. People are getting a bit freaked out, to say the least. (BBC) Although sometimes they're just small planes, authorities say. (Wired) Trump says they should be shot down. (Politico)

    2 TikTok could be gone from app stores by January 19. Last week, a US appeals court upheld a law forcing Bytedance to divest. (Reuters) The rationale behind the ban could open the door to other regulations that suppress speech. (Atlantic) Influencers are putting together their post-TikTok plans. (Business Insider) The long-shot plan to save TikTok. (Verge) The depressing truth about the coming ban. (MIT Technology Review)

    3 Authorities in Serbia are using phone-cracking tools to install spyware. Activists and journalists found their phones had been tampered with after a run-in with police. (404 Media)

    4 Cellphone videos are fueling violence inside US schools. Students are using phones to arrange, provoke, and capture brawls in the corridors. (NYT)

    5 AI search startup Perplexity says it will generate $10.5 million a month next year. It's in talks to raise money at a $9 billion valuation. (The Information) AI search could break the web. (MIT Technology Review)

    6 How Musk's partnership with Trump could influence science. Even if he can't cut as much as he'd like, he still stands to make big changes. (Nature) Is deleting the IRS his worst idea yet? (Washington Post) The top cybersecurity agency is bracing for Trump. (Wired) Trump's win is a huge loss for the climate. (MIT Technology Review)

    7 AI firms will scour the globe looking for cheap energy. Low-cost power is an absolute priority. (Wired) It's an insatiably hungry industry. (Bloomberg)

    8 Anthropic's Claude is winning the chatbot battle for tech insiders. It's not as big as ChatGPT, but it's got a special something that people like. (NYT) A new Character.ai chatbot for teens will no longer talk romance. (Verge) How to trust what a chatbot says. (MIT Technology Review)

    9 The reaction to the UnitedHealthcare CEO's murder could prompt a reckoning. Healthcare's algorithmic decision-making turns us into numbers on a spreadsheet. (Vanity Fair) Luigi Mangione has to mean something. (Atlantic)

    10 How China's satellite megaprojects are challenging Starlink. Between them, Qianfan, Guo Wang, and Honghu-3 could have as many satellites. (CNBC)

    Quote of the day

    "We've achieved peak data and there'll be no more." OpenAI's cofounder and former chief scientist, Ilya Sutskever, tells the NeurIPS conference that the way AI models are trained will have to change.

    The big story: How to stop a state from sinking (April 2024)

    In a 10-month span between 2020 and 2021, southwest Louisiana saw five climate-related disasters, including two destructive hurricanes. As if that wasn't bad enough, more storms are coming, and many areas are not prepared.

    But some government officials and state engineers are hoping there is an alternative: elevation. The $6.8 billion Southwest Coastal Louisiana Project is betting that raising residences by a few feet, coupled with extensive work to restore coastal boundary lands, will keep Louisianans in their communities.

    Ultimately, it's something of a last-ditch effort to preserve this slice of coastline, even as some locals pick up and move inland and as formal plans for managed retreat become more popular in climate-vulnerable areas across the country and the rest of the world. Read the full story. Xander Peters

    We can still have nice things

    A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet 'em at me.)

    + How to make the most of your jigsaw puzzles: try them on hard mode.
    + Mr Tickle is a maniac who needs to be stopped.
    + A song about Christmas that probably many of us can relate to, if we're honest.
    + If the original Home Alone was wince-inducing in terms of injuries, the sequel is even more excruciating.
    + The best crispy roast potatoes ever? I'll let you be the judge.
  • Google's big week was a flex for the power of big tech
    www.technologyreview.com
    Last week, this space was all about OpenAI's 12 days of shipmas. This week, the spotlight is on Google, which has been speeding toward the holiday by shipping or announcing its own flurry of products and updates.

    The combination of stuff here is pretty monumental, not just for a single company, but I think because it speaks to the power of the technology industry, even if it does trigger a personal desire that we could do more to harness that power and put it to more noble uses.

    To start, last week Google introduced Veo, a new video generation model, and Imagen 3, a new version of its image generation model.

    Then on Monday, Google announced a breakthrough in quantum computing with its Willow chip. The company claims the new machine is capable of a standard benchmark computation in under five minutes that would take one of today's fastest supercomputers 10 septillion (that is, 10^25) years. You may recall that MIT Technology Review covered some of the Willow work after researchers posted a paper preprint in August. But this week marked the big media splash. It was a stunning update that had Silicon Valley abuzz. (Seriously, I have never gotten so many quantum computing pitches as in the past few days.)

    Google followed this on Wednesday with even more gifts: a Gemini 2 release, a Project Astra update, and even more news about forthcoming agents: Mariner, an agent that can browse the web, and Jules, a coding assistant.

    First: Gemini 2. It's impressive, with a lot of performance updates. But I have frankly grown a little inured by language-model performance updates to the point of apathy. Or at least near-apathy. I want to see them do something.

    So for me, the cooler update was second on the list: Project Astra, which comes across like an AI from a futuristic movie set. Google first showed a demo of Astra back in May at its developer conference, and it was the talk of the show. But, since demos offer companies chances to show off products at their most polished, it can be hard to tell what's real and what's just staged for the audience. Still, when my colleague Will Douglas Heaven recently got to try it out himself, live and unscripted, it largely lived up to the hype. Although he found it glitchy, he noted that those glitches can be easily corrected. He called the experience stunning and said it could be generative AI's killer app.

    On top of all this, Will notes that this week Demis Hassabis, CEO of Google DeepMind (the company's AI division), was in Sweden to receive his Nobel Prize. And what did you do with your week?

    Making all this even more impressive, the advances represented in Willow, Gemini, Astra, and Veo are ones that just a few years ago many, many people would have said were not possible, or at least not in this timeframe.

    A popular knock on the tech industry is that it has a tendency to over-promise and under-deliver. The phone in your pocket gives the lie to this. So too do the rides I took in Waymo's self-driving cars this week. (Both of which arrived faster than Uber's estimated wait time. And honestly it's not been that long since the mere ability to summon an Uber was cool!) And while quantum has a long way to go, the Willow announcement seems like an exceptional advance; if not a tipping point exactly, then at least a real waypoint on a long road.

    (For what it's worth, I'm still not totally sold on chatbots. They do offer novel ways of interacting with computers, and have revolutionized information retrieval. But whether they are beneficial for humanity, especially given energy debts, the use of copyrighted material in their training data, their perhaps insurmountable tendency to hallucinate, etc., is debatable, and certainly is being debated. But I'm pretty floored by this week's announcements from Google, as well as OpenAI, full stop.)

    And for all the necessary and overdue talk about reining in the power of Big Tech, the ability to hit significant new milestones on so many different fronts all at once is something that only a company with the resources of a Google (or Apple or Microsoft or Amazon or Meta or Baidu or whichever other behemoth) can do.

    All this said, I don't want us to buy more gadgets or spend more time looking at our screens. I don't want us to become more isolated physically, socializing with others only via our electronic devices. I don't want us to fill the air with carbon or our soil with e-waste. I do not think these things should be the price we pay to drive progress forward. It's indisputable that humanity would be better served if more of the tech industry was focused on ending poverty and hunger and disease and war.

    Yet every once in a while, in the ever-rising tide of hype and nonsense that pumps out of Silicon Valley, epitomized by the AI gold rush of the past couple of years, there are moments that make me sit back in awe and amazement at what people can achieve, and in which I become hopeful about our ability to actually solve our larger problems, if only because we can solve so many other dumber, but incredibly complicated ones. This week was one of those times for me.

    Now read the rest of The Debrief

    The News

    Robotaxi adoption is hitting a tipping point. But also, GM is shutting down its Cruise robotaxi division. Here's how to use OpenAI's new video editing tool Sora. Bluesky has an impersonator problem. The AI hype machine is coming under government scrutiny.

    The Chat

    Every week, I talk to one of MIT Technology Review's journalists to go behind the scenes of a story they are working on. This week, I hit up James O'Donnell, who covers AI and hardware, about his story on how the startup defense contractor Anduril is bringing AI to the battlefield.

    Mat: James, you got a pretty up close look at something most people probably haven't even thought about yet, which is how the future of AI-assisted warfare might look. What did you learn on that trip that you think will surprise people?

    James: Two things stand out. One, I think people would be surprised by the gulf between how technology has developed for the last 15 years for consumers versus the military. For consumers, we've gotten phones, computers, smart TVs, and other technologies that generally do a pretty good job of talking to each other and sharing our data, even though they're made by dozens of different manufacturers. It's called the internet of things. In the military, technology has developed in exactly the opposite way, and it's putting them in a crisis. They have stealth aircraft all over the world, but communicating about a drone threat might be done with PowerPoints and a chat service reminiscent of AOL Instant Messenger.

    The second is just how much the Pentagon is now looking to AI to change all of this. New initiatives have surged in the current AI boom. They are spending on training new AI models to better detect threats, autonomous fighter jets, and intelligence platforms that use AI to find pertinent information. What I saw at Anduril's test site in California is also a key piece of that: using AI to connect to and control lots of different pieces of hardware, like drones and cameras and submarines, from a single platform. The amount being invested in AI is much smaller than for aircraft carriers and jets, but it's growing.

    Mat: I was talking with a different startup defense contractor recently, who was talking to me about the difficulty of getting all these increasingly autonomous devices on the battlefield talking to each other in a coordinated way. Like Anduril, he was making the case that this has to be done at the edge, and that there is too much happening for human decision making to process. Do you think that's true? Why is that?

    James: So many in the defense space have pointed to the war in Ukraine as a sign that warfare is changing. Drones are cheaper and more capable than they ever were in the wars in the Middle East. It's why the Pentagon is spending $1 billion on the Replicator initiative to field thousands of cheap drones by 2025. It's also looking to field more underwater drones as it plans for scenarios in which China may invade Taiwan. Once you get these systems, though, the problem is having all the devices communicate with one another securely. You need to play Air Traffic Control at the same time that you're pulling in satellite imagery and intelligence information, all in environments where communication links are vulnerable to attacks.

    Mat: I guess I still have a mental image of a control room somewhere, like you might see in Dr. Strangelove or War Games (or Star Wars for that matter), with a handful of humans directing things. Are those days over?

    James: I think a couple things will change. One, a single person in that control room will be responsible for a lot more than they are now. Rather than running just one camera or drone system manually, they'll command software that does it for them, for lots of different devices. The idea that the defense tech sector is pushing is to take them out of the mundane tasks, like rotating a camera around to look for threats, and instead put them in the driver's seat for decisions that only humans, not machines, can make.

    Mat: I know that critics of the industry push back on the idea of AI being empowered to make battlefield decisions, particularly when it comes to life and death, but it seems to me that we are increasingly creeping toward that, and it seems perhaps inevitable. What's your sense?

    James: This is painting with broad strokes, but I think the debates about military AI fall along similar lines to what we see for autonomous vehicles. You have proponents saying that driving is not a thing humans are particularly good at, and when they make mistakes, it takes lives. Others might agree conceptually, but debate at what point it's appropriate to fully adopt fallible self-driving technology in the real world. How much better does it have to be than humans? In the military, the stakes are higher.

    There's no question that AI is increasingly being used to sort through and surface information to decision-makers. It's finding patterns in data, translating information, and identifying possible threats. Proponents are outspoken that that will make warfare more precise and reduce casualties. What critics are concerned about is how far across that decision-making pipeline AI is going, and how much there is human oversight.

    I think where it leaves me is wanting transparency. When AI systems make mistakes, just like when human military commanders make mistakes, I think we deserve to know, and that transparency does not have to compromise national security. It took years for reporter Azmat Khan to piece together the mistakes made during drone strikes in the Middle East, because agencies were not forthcoming. That obfuscation absolutely cannot be the norm as we enter the age of military AI.

    Mat: Finally, did you have a chance to hit an In-N-Out burger while you were in California?

    James: Normally In-N-Out is a requisite stop for me in California, but ahead of my trip I heard lots of good things about the burgers at The Apple Pan in West LA, so I went there. To be honest, the fries were better, but for the burger I have to hand it to In-N-Out.

    The Recommendation

    A few weeks ago I suggested Ca7riel and Paco Amoroso's appearance on NPR Tiny Desk. At the risk of this space becoming a Tiny Desk stan account, I'm back again with another. I was completely floored by Doechii's Tiny Desk appearance last week. It's so full of talent and joy and style and power. I came away completely inspired and have basically had her music on repeat in Spotify ever since. If you are already a fan of her recorded music, you will love her live. If she's new to you, well, you're welcome. Go check it out. Oh, and don't worry: I'm not planning to recommend Billie Eilish's new Tiny Desk concert in next week's newsletter. Mostly because I'm doing so now.
  • As a divorced mom of 2, sharing custody during the holidays is brutal. Not competing with my ex helped me enjoy it more.
    www.businessinsider.com
    The hardest part of divorce was being without my kids, especially during the holidays. I felt overwhelmed with pressure to compensate by making them extra special. Relaxing on what I thought the holidays were supposed to look like has allowed us to start new traditions.

    I sobbed as I sat surrounded by the remnants of Christmas morning: half-eaten cinnamon rolls, discarded wrapping, and little piles of presents my 3- and 6-year-old daughters stacked up before they left to spend the rest of Christmas break with their dad. I was still getting used to sharing custody, and the hardest part was being without them, especially during the holidays.

    This was my new normal

    It felt so wrong, but it was our new normal, thanks to a divorce and custody order specifying that we would only spend every other birthday and major holiday together. I was devastated, my mom guilt was in overdrive, and I felt overwhelmed with pressure to make the holidays better than ever, to compensate for my children's suffering, our lack of time together, and what I perceived as my failure to fix everything.

    I set unreasonably high standards for myself in the hopes of making every Christmas better than the one before: more gifts, extravagant decorations, and fun, memorable experiences. It was exhausting, I never felt good enough, and I was spending money I couldn't afford as a single parent raising two kids in one of the nation's most expensive cities. In my quest to make up for what we'd lost, I'd unwittingly turned half the year, from Halloween through their first-quarter birthdays, into my own unwinnable marathon of misery.

    I was setting a poor example for them

    It took me a while to understand that our enjoyment of these special days was inversely proportional to the size of my ever-growing to-do list, but once I did, there was no going back. Especially when I realized what a poor example I was setting for my daughters by reinforcing the patriarchal message that women, especially mothers, are responsible for everyone else's joy, even when it means abandoning our own.

    Moving forward, I decided to change my approach and relax my death grip on what I thought the holidays were supposed to look like. Most importantly, this meant reducing the number of items on my to-do list so I could spend more time just being with my kids and savoring their easy, childlike joy. This may sound simple, but it's just not. The expectation that moms create an abundance of magic is so ubiquitous that we're not often aware of how we surrender to it.

    I changed how I did things

    So instead of spending time I didn't have putting up lights I couldn't afford, we packed into the car and drove around listening to cheesy Christmas music while admiring our neighbors' decorations and drinking to-go cups of hot chocolate. Not the kind you film yourself making from scratch at an Insta-worthy cocoa bar with 10 toppings, but the kind you buy for $3, mix with warm milk, and call it good.

    Instead of competing with my ex-husband to buy the best gifts, I finally admitted to myself that I would never be able to match his budget and decided that it was in fact a win to let him buy the laptops, smartphones, and sneaks, while I focused on more affordable and traditional gifts like books, music, and pajamas.

    As I began to prioritize my own needs, I realized that the religious holidays my ex-husband favored were less important to me than nature-based ones like spring equinox and winter solstice, which relieved even more competitive pressure. This was also an important reminder that holidays are just an arbitrary day on the calendar, and we could celebrate anytime.

    Later, when my daughters were in high school, I gave them cash for birthdays and Christmas instead of spending hours searching for the perfect gifts. They loved being able to buy what they wanted, and I loved saving myself the time, effort, and worry that they wouldn't like my selections.

    As a single mom of two daughters, the freedom to adapt and reimagine the holidays on our own terms was the gift we needed to truly enjoy them.
  • RFK Jr.'s key advisor petitioned to revoke approval of the polio vaccine. Photos show the US's last outbreak.
    www.businessinsider.com
    A virus that affects nerves in the spinal cord or brain stem causes polio.A young girl using an abacus in a bed at the Columbia Presbyterian Medical Center in New York, circa 1950. Douglas Grundy/Three Lions/Getty Images Polio mainly affects children under the age of five. Most people only have mild symptoms, but one in 200 cases causes irreversible paralysis. Between 5% and 10% of paralyzed patients die when muscles used for breathing can no longer move, according to the World Health Organization. The US had several polio epidemics in the 20th century, including in 1916 and 1937.A doctor removes special casts to examine the a polio patient's legs in 1916. Bettmann via Getty Images Polio was first identified in 1909, though it had been around for centuries, and the US had a serious outbreak in 1916, which started in New York.At the time, doctors understood very little about the disease, including how to treat and prevent it.An estimated 6,000 people died and 21,000 had resulting paralysis from the 1916 outbreak. There were a series of polio outbreaks in the 1940s and 1950s.Two-month-old Martha Ann Murray is watched over by a nurse in an iron lung in 1952. AP Photo The number of polio cases rose from eight per 100,000 in 1944 to 37 per 100,000 in 1952, according to Yale Medical Magazine. During that period, about 60,000 children were contracting the disease each year.There was an increase in people over the age of 10 getting the virus, too. Treatments for polio included hot wool and physical therapy.Larry Becker draws a hospital floor plan using his right foot in 1955. AP Photo Early on, some doctors would put patients in full-body casts, which could make paralysis permanent.Roosevelt sought relief by taking dips in Georgia's warm springs.During a 1940s polio outbreak, the Hickory Emergency Infantile Paralysis Hospital in North Carolina tried treating patients withboiled wool "hot packs" for the skin and physical therapy. Patients with severe cases lived their entire lives in iron lungs.A nurse oversees a boy with polio in an iron lung in 1955. Kirn Vintage Stock/Corbis/Getty Images Bellows inside the large metal box provided suction to help patients breathe when they could no longer do so on their own. The device was first used to save the life of an eight-year-old patient in 1928. Some famous people contracted polio as children, including Mia Farrow and Alan Alda.John Farrow carries daughter Mia out of the hospital in 1954. Bettmann via Getty Images Farrow, the daughter of director John Farrow and the actress and Tarzan-girl Maureen O'Sullivan, became ill during an LA polio outbreak in the summer of 1954."What I saw will never leave me in the hospitals and in the public wards for contagious diseases," Farrow said in 2000. Franklin D. Roosevelt started the National Foundation for Infantile Paralysis, now known as the March of Dimes, to find a cure for polio.Children with polio meet Basil O'Connor, president of the National Foundation for Infantile Paralysis. Matty Zimmerman/AP Photo Roosevelt founded the organization with his former law partner, Basil O'Connor, to help fund research into a polio vaccine.Roosevelt knew the effects of polio first-hand. He was diagnosed with polio in 1921 at the age of 39 and used a wheelchair, mainly in private, while he was president.Celebrities such as Grace Kelly and Joan Crawford helped promote campaigns for a vaccine.Grace Kelly with Mary Koloski at a March of Dimes event in 1955. 
Credit: API/Gamma-Rapho via Getty Images
These campaigns helped raise half of all donations to health charities in the US, PBS reported.

Jonas Salk was one of the researchers working on a polio vaccine.
Dr. Jonas Salk, developer of the polio vaccine, in a laboratory in Pittsburgh, Pennsylvania, in 1954. Credit: AP Photo
At the University of Pittsburgh School of Medicine, Salk began developing a vaccine in the early 1950s. He grew polioviruses in cultures of monkey kidney cells and then used formaldehyde to kill the virus. When he injected the killed virus into live monkeys, it protected them from the disease, according to the Science History Institute.

After his vaccine proved successful in monkeys, Salk began testing it on children.
Jonas Salk gives a vaccine to a child in the 1950s. Credit: Mondadori via Getty Images
Salk first injected children who had already had polio. He noted that their antibody levels rose after vaccination, a promising sign that the vaccine helped the body fight the infection.

Roosevelt's foundation also backed another potential prevention method, gamma globulin.
A line of children and parents wait to be immunized with gamma globulin in 1953. Credit: Paul E. Thomson/AP Photo
In the early 1950s, over 220,000 children were injected with gamma globulin, proteins in blood plasma that are rich in antibodies. The hope was that the serum would boost kids' immune systems and keep them from contracting polio. After looking at the data, though, a committee of epidemiologists and other experts concluded that gamma globulin wasn't effective.

The gamma globulin trials helped pave the way for similar trials with Salk's vaccine.
Salk gives a shot of the polio vaccine to a girl during test trials in 1954. Credit: Bettmann/Getty Images
In 1954, the National Foundation for Infantile Paralysis sponsored a trial to test Salk's vaccine. Nearly 2 million children between 6 and 9 years old took part; they were called "Polio Pioneers." Participants were divided into three groups: the first received the vaccine, the second received a placebo, and the third received neither. The following year, in 1955, the vaccine was declared 90% effective against Type 2 and Type 3 poliovirus and 60% to 70% effective against Type 1.

Nearly 2 million children participated in the trials, and the vaccine was found to be 90% effective.
Jonas Salk arrives in Pittsburgh with his family in 1955. Credit: AP Photo
At a press conference, Thomas Francis Jr., director of the Poliomyelitis Vaccine Evaluation Center at the University of Michigan, called the vaccine "safe, effective, and potent." It was headline news. "The story has blanketed the front pages of all the papers I have seen along a 1,600-mile route from New York to Saint Louis, to Memphis and Dallas," Alistair Cooke reported for The Guardian at the time.

Though he had some detractors, Salk won many Americans' trust.
Jonas Salk's polio vaccine in 1955. Credit: AP Photo
It took mere hours for Salk's vaccine to be licensed for use after the announcement of its efficacy.

Vaccine distribution began almost immediately.
Polio vaccines are shipped to Europe in 1955. Credit: Universal History Archive/Universal Images Group/Getty Images
The NFIP had already funded facilities that could start producing the vaccine right away. The US sent some vaccines to Europe, and several countries started up their own production.

Children would receive a series of shots to complete the vaccination process.
Eight-year-old Ann Hill gets the polio vaccine days after Salk's announcement that it was effective.
Credit: AP Photo
Children needed three shots, each costing between $3 and $5 (around $35 to $59 today), according to The Conversation.

Shortly after the vaccine program began, a tragic incident caused several deaths.
Vaccines are prepared for distribution around the West Coast from Cutter Laboratories in April 1955. Credit: Ernest K. Bennett/AP Photo
One of the facilities manufacturing the vaccine, Cutter Laboratories, had failed to fully kill the virus, leaving live poliovirus in hundreds of thousands of doses. In April 1955, over 400,000 children received the improperly prepared vaccines. The mistake led to 260 cases of polio-related paralysis and several deaths, according to the National Institutes of Health.

Despite the incident at Cutter Laboratories, hundreds of thousands of children were vaccinated in 1955.
Leonard McCormick "Bobo" Scheele, son of the US surgeon general, receives the polio vaccine in May 1955. Credit: Byron Rollins/AP Photo
The incident at Cutter Laboratories panicked many parents, and the vaccine was pulled from the market on April 27, 1955. After a massive effort to recheck all the vaccines confirmed they were safe to use, immunization resumed on May 15, 1955. Worldwide, hundreds of thousands of children received the vaccine.

Elvis got the vaccine backstage at "The Ed Sullivan Show."
Elvis Presley receives the polio vaccine in New York City on October 28, 1956. Credit: AP Photo
If people were hesitant to have their children vaccinated, Elvis may have helped persuade them. After he got the jab in the fall of 1956, many followed suit. Within six months, 80% of America's youngest generation had been vaccinated, Scientific American reported in 2021.

Other celebrities, including Louis Armstrong and Ella Fitzgerald, also promoted the vaccine to help inform people of all races and genders.
A nurse prepares elementary school children for a polio vaccine shot. Credit: Bettmann/Getty Images
Roosevelt's foundation was heavily involved in promoting the vaccine and recruited celebrities like Louis Armstrong and Ella Fitzgerald for its campaigns. "There was a very early recognition that you couldn't just have white people talking about the vaccine," Stacey D. Stewart, former president and CEO of the March of Dimes, told NPR in 2021.

The polio vaccine quickly began protecting people against the virus.
Edward Scheffler with his mother after traveling by railroad in 1957. Credit: AP Photo
Between 1953 and 1957, polio cases in the US dropped from 35,000 to 5,300 a year, according to the BBC.

Meanwhile, Salk's rival, Albert Sabin, was still working on his own polio vaccine.
Mark Stacey is visited by Albert Sabin (right) and Dr. Walter Langsam (left) in 1959. Credit: Gene Smith/AP Photo
Sabin disagreed with Salk's use of a killed-virus vaccine; he preferred a live but weakened form of the virus that could be taken by mouth rather than injected. Once Sabin showed his version was effective in a trial in the Soviet Union, it was approved for use in the US in 1961. Because Sabin's vaccine was inexpensive and easy to administer, many countries adopted the oral method. In fact, the song "A Spoonful of Sugar (Helps the Medicine Go Down)" in the 1964 film "Mary Poppins" was inspired by Sabin's polio vaccine.

A combination of the two vaccines helped nearly eradicate polio worldwide.
An emergency polio ward in Boston, Massachusetts, in 1955. Credit: AP Photo
During the 2010s, the world eradicated polio Types 2 and 3. Only Type 1 remains.
The World Health Organization hopes to wipe out the final strain by 2026, but that goal is impossible without polio vaccines.

This story was originally published on August 13, 2022, and updated on December 16, 2024. Jake Johnson contributed to a previous version of this post.