• QXO Hires an AI Chief to Help Sell Items Like Pipes and Lumber
    www.wsj.com
    The latest venture of serial entrepreneur Brad Jacobs taps Ashwin Rao to lead its AI plan targeting the building materials distribution sector.
    0 Comments ·0 Shares ·147 Views
  • Whiskey Lovers, Would You Pay $31 for This Manhattan?
    www.wsj.com
    The Manhattan is a famously forgiving drink. Yet a splurge here or there (a premium whiskey, better bitters) can prove surprisingly worthwhile.
    0 Comments ·0 Shares ·148 Views
  • Heretic Review: Hugh Grant's Sly, Sinister Gamesmanship
    www.wsj.com
    The charming actor continues his recent run of shady characters with this devilish psychological thriller about a man who invites two Mormon missionaries into his home.
    0 Comments ·0 Shares ·141 Views
  • After decades, FDA finally moves to pull ineffective decongestant off shelves
    arstechnica.com
    After decades, FDA finally moves to pull ineffective decongestant off shelves. Last year, FDA advisors unanimously voted that oral phenylephrine is ineffective. Beth Mole, Nov 7, 2024.
    A box of Sudafed PE sinus pressure and pain medicine containing phenylephrine is displayed for sale in a CVS Pharmacy store in Hawthorne, California, on September 12, 2023. Credit: Getty | Patrick T. Fallon.
    In a long-sought move, the Food and Drug Administration on Thursday formally began the process of abandoning oral doses of a common over-the-counter decongestant, which the agency concluded last year is not effective at relieving stuffy noses. Specifically, the FDA issued a proposed order to remove oral phenylephrine from the list of drugs that drugmakers can include in over-the-counter products, also known as the OTC monograph. Once removed, drugmakers will no longer be able to include phenylephrine in products for the temporary relief of nasal congestion.
    "It is the FDA's role to ensure that drugs are safe and effective," Patrizia Cavazzoni, director of the FDA's Center for Drug Evaluation and Research, said in a statement. "Based on our review of available data and consistent with the advice of the advisory committee, we are taking this next step in the process to propose removing oral phenylephrine because it is not effective as a nasal decongestant."
    For now, the order is just a proposal. The FDA will open a public comment period, and if no comments can sway the FDA's previous conclusion that the drug is useless, the agency will make the order final. Drugmakers will get a grace period to reformulate their products.
    Reviewed reviews
    The slow-moving abandonment of phenylephrine is years in the making. The decongestant was originally approved by the FDA back in 1976, but it came to prominence after 2006. That was the year the "Combat Methamphetamine Epidemic Act of 2005" came into effect, and pseudoephedrine, the main component of Sudafed, moved behind the pharmacy counter to keep it from being used to make methamphetamine. With pseudoephedrine out of easy reach at drugstores, phenylephrine became the leading over-the-counter decongestant. And researchers had questions.
    In 2007, an FDA panel reevaluated the drug, which allegedly works by shrinking blood vessels in the nasal passage, opening up the airway. While the panel upheld the drug's approval, it concluded that more studies were needed for a full assessment. After that, three large, carefully designed studies were conducted: two by Merck for the treatment of seasonal allergies and one by Johnson & Johnson for the treatment of the common cold. All three found no significant difference between phenylephrine and a placebo.
    Last year, the FDA reevaluated the drug again, taking into consideration the new studies and taking a deeper look at the 14 studies from the 1950s to 1970s that earned phenylephrine its initial approval. The FDA noted that those 14 studies assessed congestion using a dubious measure of nasal airway resistance that has since been abandoned. But even with the shoddy measurement, the studies provided mixed efficacy results. And the overall finding of efficacy hinged on only two of the studies, which were conducted at the same lab.
    Too good to be real
    No other lab was ever able to replicate the positive results from those two studies. And when FDA scientists carefully looked through the data, they found evidence that some of the numbers could have been fudged and that the results were "too good to be real."
    As a final nail in phenylephrine's coffin, modern studies suggest that when phenylephrine is taken orally, it is highly metabolized in the gut, leaving less than 1 percent of the consumed dose active in the body. The finding explains why oral doses don't cause the constriction of blood vessels throughout the body that could lead to an uptick in blood pressure, a side effect sometimes seen with pseudoephedrine. While researchers initially thought the lack of blood pressure increases was a positive finding, in retrospect, it was a hint that the drug wasn't working.
    With that, a panel voted unanimously, 16 to 0, that oral doses of phenylephrine are not effective at treating a stuffy nose. Afterward, CVS announced that it would remove products that had phenylephrine as the sole active ingredient.
    Despite the seemingly damning evidence, the industry group representing makers of phenylephrine-containing products, the Consumer Healthcare Products Association (CHPA), still disputed the FDA's move.
    "CHPA is disappointed in FDA's proposal to reverse its long-established view of oral PE [phenylephrine]," CHPA CEO Scott Melville said in a statement Thursday. CHPA maintains its position on the drug's efficacy. "As science and methods advance, new data should be considered in the context of the full weight of available evidence, not as a complete replacement of the previous body of evidence, especially when considering an ingredient as safely and widely used as PE. CHPA will review the Proposed Order and submit comments accordingly," Melville said.
    0 Comments ·0 Shares ·128 Views
  • Law enforcement operation takes down 22,000 malicious IP addresses worldwide
    arstechnica.com
    Law enforcement operation takes down 22,000 malicious IP addresses worldwide. Operation Synergia II took aim at phishing, ransomware, and information stealing. Dan Goodin, Nov 7, 2024.
    An international coalition of police agencies has taken a major whack at criminals accused of running a host of online scams, including phishing, the stealing of account credentials and other sensitive data, and the spreading of ransomware, Interpol said recently.
    The operation, which ran from the beginning of April through the end of August, resulted in the arrest of 41 people and the takedown of 1,037 servers and other infrastructure running on 22,000 IP addresses. Synergia II, as the operation was named, was the work of multiple law enforcement agencies across the world, as well as three cybersecurity organizations.
    A global response
    "The global nature of cybercrime requires a global response which is evident by the support member countries provided to Operation Synergia II," said Neal Jetton, director of the Cybercrime Directorate at INTERPOL. "Together, we've not only dismantled malicious infrastructure but also prevented hundreds of thousands of potential victims from falling prey to cybercrime. INTERPOL is proud to bring together a diverse team of member countries to fight this ever-evolving threat and make our world a safer place."
    Among the highlights of Operation Synergia II were:
    Hong Kong (China): Police supported the operation by taking offline more than 1,037 servers linked to malicious services.
    Mongolia: Investigations included 21 house searches, the seizure of a server and the identification of 93 individuals with links to illegal cyber activities.
    Macau (China): Police took 291 servers offline.
    Madagascar: Authorities identified 11 individuals with links to malicious servers and seized 11 electronic devices for further investigation.
    Estonia: Police seized more than 80GB of server data, and authorities are now working with INTERPOL to conduct further analysis of data linked to phishing and banking malware.
    The three private cybersecurity organizations that were part of Operation Synergia II were Group-IB, Kaspersky, and Team Cymru. All three used the telemetry intelligence in their possession to identify malicious servers and made it available to participating law enforcement agencies. The law enforcement agencies conducted investigations that resulted in house searches, the disruption of malicious cyber activities, the lawful seizures of servers and other electronic devices, and arrests.
    The three private security organizations helped identify 30,000 potentially malicious IP addresses. Follow-on investigations later concluded that roughly 76 percent of them were malicious, amounting to about 22,800. Authorities also seized 59 servers and 43 electronic devices, including laptops, mobile phones, and hard disks. The operation led to the arrest of 41 individuals, with 65 others still under investigation.
    INTERPOL said Operation Synergia II is a response to the escalating threat and professionalization of transnational cybercrime. The three types of cybercrime prioritized were phishing, infostealers, and ransomware.
    The agency said the advent of generative AI is giving phishers a leg up by allowing them to create more sophisticated emails that are translated into multiple languages. INTERPOL said there was a 40 percent increase in 2023 in the sale of logs collected from infostealers on the deep and dark web. Officials also noted an average 70 percent increase in ransomware attacks globally.
    Group-IB and Team Cymru have statements here and here documenting their participation.
    0 Comments ·0 Shares ·124 Views
  • ThreatLocker CEO Talks Supply Chain Risk, AI's Cybersecurity Role, and Fear
    www.informationweek.com
    Shane Snider, Senior Writer, InformationWeek. November 7, 2024. 6 Min Read. Pictured: ThreatLocker CEO Danny Jenkins. Image provided by ThreatLocker.
    It's no secret that cybersecurity concerns are growing. This past year has seen massive breaches, such as the breach of National Public Data (with 2.7 billion records stolen), and several large breaches of Snowflake customers such as Ticketmaster, Advance Auto Parts and AT&T. More than 165 companies were impacted by the Snowflake-linked breaches alone, according to a Mandiant investigation.
    According to CheckPoint research, global cyber-attacks increased by 30% in the second quarter of 2024, to 1,636 weekly attacks per organization. An IBM report says the average cost of a data breach globally rose 10% in 2024, to $4.8 million.
    So, it's probably not that surprising that Orlando, Fla.-based cybersecurity firm ThreatLocker has ballooned to 450 employees since its 2017 launch. InformationWeek caught up with ThreatLocker CEO Danny Jenkins at the Gartner IT Symposium/XPO in Orlando last month.
    (Editor's note: The following interview is edited for clarity and brevity.)
    Can you give us a little overview on what you were talking about at the event?
    What we're talking about is that when you're installing software on your computer, that software has access to everything you have access to, and people often don't realize that if they download that game, and there was a back door in that game, or some vulnerability in that game, it could potentially steal my files, grant someone access to my computer, grab the internet and send data. So, what we were really talking about was the supply chain risk. The biggest thing is vulnerabilities: the things a vendor didn't intend to do, but that accidentally granted someone access to your data. You can really enhance your security through sensible controls and limiting access to those applications rather than trying to find every bad thing in the world.
    AI has been the major recurring theme throughout the symposium. Can you talk a little about the way we approach these threats and how that is going to change as more businesses adopt emerging technologies like GenAI?
    What's interesting is that we're actually doing a session on how to create successful malware, and we're going to talk about how we're able to use AI to create undetectable malware versus the old way. If you think about AI, and you think about two years ago, if you wanted to create malware, there were a limited number of people in the world that could do that -- you'd have to be a developer, you'd have to have some experience, you'd have to be smart enough to avoid protections. That pool of people was quite small. Today, you can just ask ChatGPT to create a program to do whatever you want, and it will spit out the code instantly. The number of people who have the ability to create malware has now drastically increased. The way to defend against that is to change the way you think about security. The way most companies think about security now is they're looking for threats in their environment -- but that's not effective. The better way of approaching security is really to say, "I'm just going to block what I don't need, and I don't care if it's good and I don't care if it's bad. If it's not needed in my business, I'm going to block it from happening."
    As someone working in security, is the pace of AI adoption in enterprise a concern?
    I think the concern is the pace and the fear. AI has been around for a long time. What we're seeing the last two years is generative AI, and that's what's scaring people. If you think about self-driving cars, you think about the ability of machine learning, the ability to see data and manipulate and learn from that data. What's scary is that the consumer is now seeing AI that produces, and before it was always stuff in the background that you never really thought about. You never really thought about how your car is able to determine if something's a trash can or if it's a person. Now this thing can draw pictures and it can write documents better than I do, and create code. Am I worried about AI taking over the world from that perspective? No. But I am concerned about the tool set that we've now given people who may not be ethical.
    Before, if you were smart enough to write successful malware, at least in the Western Hemisphere, you're smart enough to get a job and you're not going to risk going to jail. The people who were creating successful malware before, or successful cyber-attacks, were people in countries where there were not opportunities, like Russia. Now, you don't need to be smart enough to create successful cyber-attacks, and that's what concerns me. If you give someone who doesn't have the capacity to earn a living access to tools that can allow them to steal data, the path they are going to follow is cyber crime. Just like other crime, when the economy is down and people don't have jobs, people steal and crime goes up. Cyber crime before was limited to people who had an understanding of technology. Now, the whole world will have access and that's what scares me -- and GenAI has facilitated that.
    How do you see your business changing in the next 5-10 years because of AI adoption?
    Ultimately, it changes the way people think about security, to where they have to start adopting more zero-trust approaches and more restrictive controls in their environment. That's how it has to go -- there is no alternative. Before, there was a 10% chance you were going to get damaged by an attack; now it's an 80% chance.
    If you're the CIO of an enterprise, how should you be looking at building out these new technologies and building on these new platforms? How should you be thinking about the security side of it?
    At the end of the day, you have to consider the internal politics of the business. And we've gone from a world where IT people and CIOs, who often come from introverted backgrounds where they don't communicate with boards, were seen as the people that make our computers work, and not the people who protect our business. Now the board is saying we have to bring in a security department. I feel like if you're the CIO, you should be leading the conversation with your security team. As a CIO, you should be driving that.
    What was one of your biggest takeaways from the event overall?
    I think the biggest thing I'm seeing in the industry is that fear is increasing, and rightly so. We're seeing more people willing to say, "I need to solve my problem. I know we're sitting ducks right now." That's because we're on the technology side and we live and breathe this stuff. But what we don't necessarily always understand is what the customer perspective and customer viewpoint is and how we solve their problems.
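    Jenkins' "block what I don't need" framing is essentially a default-deny application allowlist. The sketch below is only a rough illustration of that idea under assumed details (the allowlist file name, hashes, and paths are hypothetical); it is not ThreatLocker's product or any API described in the interview.
```python
import hashlib
import json
from pathlib import Path

# Hypothetical allowlist file: maps application names to approved SHA-256 hashes.
# In a real deployment this would be centrally managed and protected.
ALLOWLIST_FILE = Path("approved_apps.json")


def file_sha256(path: Path) -> str:
    """Compute the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_execution_allowed(path: Path) -> bool:
    """Default-deny check: permit a program only if its hash is on the allowlist."""
    try:
        approved = set(json.loads(ALLOWLIST_FILE.read_text()).values())
    except FileNotFoundError:
        # No allowlist means nothing has been approved: deny everything.
        return False
    return file_sha256(path) in approved


if __name__ == "__main__":
    target = Path("/usr/bin/some_tool")  # hypothetical binary to check
    print(f"{target}: {'allow' if is_execution_allowed(target) else 'block'}")
```
    The point is the inversion Jenkins describes: instead of hunting for known-bad software, anything not explicitly approved is blocked by default.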
    0 Comments ·0 Shares ·159 Views
  • GenAI's Impact on Cybersecurity
    www.informationweek.com
    Generative AI adoption is becoming ubiquitous as more software developers include the capability in their applications and users flock to sites like OpenAI to boost productivity. Meanwhile, threat actors are using the technology to accelerate the number and frequency of attacks.
    "GenAI is revolutionizing both offense and defense in cybersecurity. On the positive side, it enhances threat detection, anomaly analysis and automation of security tasks. However, it also poses risks, as attackers are now using GenAI to craft more sophisticated and targeted attacks [such as] AI-generated phishing," says Timothy Bates, AI, cybersecurity, blockchain & XR professor of practice at the University of Michigan and former Lenovo CTO. "If your company hasn't updated its security policies to include GenAI, it's time to act."
    According to James Arlen, CISO at data and AI platform company Aiven, GenAI's impact is proportional to its usage.
    "If a bad actor uses GenAI, you'll get bad results for you. If a good actor uses GenAI wisely, you'll get good results. And then there is the giant middle ground of bad actors just doing dumb things [like] poisoning the well and nominally good actors with the best of intentions doing unwise things," says Arlen. "I think the net result is just acceleration. The direction hasn't changed, it's still an arms race, but now it's an arms race with a turbo button."
    The Threat Is Real and Growing
    GenAI is both a blessing and a curse when it comes to cybersecurity.
    "On the one hand, the incorporation of AI into security tools and technologies has greatly enhanced vendor tooling to provide better threat detection and response through AI-driven features that can analyze vast amounts of data, far quicker than ever before, to identify patterns and anomalies that signal cyber threats," says Erik Avakian, technical counselor at Info-Tech Research Group. "These new features can help predict new attack vectors, detect malware, vulnerabilities, phishing patterns and other attacks in real time, including automating the response to certain cyber incidents. This greatly enhances our incident response processes by reducing response times and allowing our security analysts to focus on other and more complex tasks."
    Meanwhile, hackers and hacking groups have already incorporated AI and large language model (LLM) capabilities to carry out incredibly sophisticated attacks, such as next-generation phishing and social engineering attacks using deepfakes.
    "The incorporation of voice impersonation and personalized content through deepfake attacks via AI-generated videos, voices or images makes these attacks particularly harder to detect and defend against," says Avakian. "GenAI can and is also being used by adversaries to create advanced malware that adapts to defenses and evades current detection systems."
    Pillar Security's recent State of Attacks on GenAI report contains some sobering statistics about GenAI's impact on cybersecurity:
    90% of successful attacks resulted in sensitive data leakage.
    20% of jailbreak attack attempts successfully bypassed GenAI application guardrails.
    Adversaries require an average of just 42 seconds to execute an attack.
    Attackers needed only five interactions, on average, to complete a successful attack using GenAI applications.
    The attacks exploit vulnerabilities at every stage of interaction with GenAI systems, underscoring the need for comprehensive security measures. In addition, the attacks analyzed as part of Pillar Security's research reveal an increase in both the frequency and complexity of prompt injection attacks, with users employing more sophisticated techniques and making persistent attempts to bypass safeguards.
    "My biggest concern is the weaponization of GenAI -- cybercriminals using AI to automate attacks, create fake identities or exploit zero-day vulnerabilities faster than ever before. The rise of AI-driven attacks means that attack surfaces are constantly evolving, making traditional defenses less effective," says the University of Michigan's Bates. "To mitigate these risks, we're focusing on AI-driven security solutions that can respond just as rapidly to emerging threats. This includes leveraging behavioral analytics, AI-powered firewalls, and machine learning algorithms that can predict potential breaches."
    In the case of deepfakes, Josh Bartolomie, VP of global threat services at email threat and defense solution provider Cofense, recommends an out-of-band communication method to confirm a potentially fraudulent request, utilizing internal messaging services such as Slack, WhatsApp, or Microsoft Teams, or even establishing specific code words for specific types of requests or per executive leader.
    And data usage should be governed.
    "With the increasing use of GenAI, employees may look to leverage this technology to make their jobs easier and faster. However, in doing so, they can be disclosing corporate information to third-party sources, including such things as source code, financial information, customer details [and] product insight," says Bartolomie. "The risk of this type of data being disclosed to third-party AI services is high, as the totality of how the data is used can lead to a much broader data disclosure that could negatively impact that organization and their products [and] services."
    Casey Corcoran, field chief information security officer at cybersecurity services company Stratascale, an SHI company, says that in addition to phishing campaigns and deepfakes, bad actors are using models that are trained to take advantage of weaknesses in biometric systems and clone persona biometrics that will bypass technical biometric controls.
    "[M]y two biggest fears are: 1) that rapidly evolving attacks will overwhelm traditional controls and overpower the ability of humans to distinguish between true and false; and 2) breaking the need to know and overall confidentiality and integrity of data through unmanaged data governance in GenAI use within organizations, including data and model poisoning," says Corcoran.
    Tal Zamir, CTO at advanced email and workspace security solutions provider Perception Point, warns that attackers exploit vulnerabilities in GenAI-powered applications like chatbots, introducing new risks, including prompt injections. They also use the popularity of GenAI apps to spread malicious software, such as creating fake GenAI-themed Chrome extensions that steal data.
    "Attackers leverage GenAI to automate tasks like building phishing pages and crafting hyper-targeted social engineering messages, increasing the scale and sophistication of attacks," says Zamir. "Organizations should educate employees about the risks of sharing sensitive information with GenAI tools, as many services are in early stages and may not follow stringent security practices. Some services utilize user inputs to train models, risking data exposure. Employees should be mindful of legal and accuracy issues with AI-generated content, and always review it before sharing, as it could embed sensitive information."
    Bad actors can also use GenAI to identify zero days and create exploits. Similarly, defenders can also find zero days and create patches, but time is the enemy: hackers are not encumbered by rules that businesses must follow.
    "[T]here will likely still be a big delay in applying patches in a lot of places. Some might even require physically replacing devices," says Johan Edholm, co-founder, information security officer and security engineer at external attack surface management platform provider Detectify. "In those cases, it might be quicker to temporarily add things between the vulnerable system and the attacker, like a WAF, firewall, air gapping, or similar, but this won't mitigate or solve the risk, only reduce it temporarily."
    Make Sure Company Policies Address GenAI
    According to Info-Tech Research Group's Avakian, sound risk management starts with general and AI-specific governance practices that implement AI policies.
    "Even if our organizations have not yet incorporated GenAI technologies or solutions into the environment, it is likely that our own employees have experimented with it or are using AI applications or components of it outside the workplace," says Avakian. "As CISOs, we need to be proactive and take a multi-faceted approach to implementing policies that account for our end-user acceptable use policies as well as incorporating AI reviews into our risk assessment processes that we already have in place. Our security policies should also evolve to reflect the capabilities and risks associated with GenAI if we don't have such inclusions in place already."
    Those policies should span the breadth of GenAI usage, ranging from AI training that covers data protection to monitoring to securing new and existing AI architectural deployments. It's also important that security, the workforce, privacy teams and legal teams understand AI concepts, including the architecture, privacy and compliance aspects, so they can fully vet a solution containing AI components or features that the business would like to implement.
    "Implementing these checks into a review process ensures that any solutions introduced into the environment will have been vetted properly and approved for use and any risks addressed prior to implementation and use, vastly reducing risk exposure or unintended consequences," says Avakian. "Such reviews should incorporate policy compliance, access control reviews, application security, monitoring and associated policies for our AI models and systems to ensure that only authorized personnel can access, modify or deploy them into the environment. Working with our legal teams and privacy officers can help ensure any privacy and legal compliance issues have been fully vetted to ensure data privacy and ethical use."
    What if your company's policies have not been updated yet? Thomas Scanlon, principal researcher at Carnegie Mellon University's Software Engineering Institute, recommends reviewing exemplar policies created by professional societies to which they belong or by consulting firms with multiple clients.
    "The biggest fear for GenAI's impact on cybersecurity is that well-meaning people will be using GenAI to improve their work quality and unknowingly open an attack vector for adversaries," says Scanlon. "Defending against known attack types for GenAI is much more straightforward than defending against accidental insider threats."
    Technology spend and risk management platform Flexera established a GenAI policy early on, but it became obvious that the policy was quickly becoming obsolete.
    "GenAI creates a lot of nuanced complexity that requires fresh approaches for cybersecurity," says Conal Gallagher, CISO & CIO of Flexera. "A policy needs to address whether the organization allows or blocks it. If allowed, under what conditions? A GenAI policy must consider data leakage, model inversion attacks, API security, unintended sensitive data exposure, data poisoning, etc. It also needs to be mindful of privacy, ethical, and copyright concerns."
    To address GenAI as part of comprehensive risk management, Flexera formed an internal AI Council to help navigate the rapidly evolving threat landscape.
    "Focusing efforts there will be far more meaningful than any written policy. The primary goal of the AI Council is to ensure that AI technologies are used in a way that aligns with the company's values, regulatory requirements, ethical standards and strategic objectives," says Gallagher. "The AI Council is comprised of key stakeholders and subject matter experts within the company. This group is responsible for overseeing the development, deployment and internal use of GenAI systems."
    Bottom Line
    GenAI must be contemplated from end-user, corporate-risk and attacker perspectives. It also requires organizations to update policies to include GenAI if they haven't done so already.
    The risks are generally twofold: intentional attacks and inadvertent employee mistakes, both of which can have dire consequences for unprepared organizations. If internal policies have not been reviewed with GenAI specifically in mind and updated as necessary, organizations open the door to attacks that could have been avoided or mitigated.
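    Bartolomie's warning about employees pasting source code, financial information, or customer details into third-party GenAI services suggests a simple outbound screening control. The Python sketch below is a hedged illustration only; the patterns, function names, and the idea of gating prompts this way are assumptions for the example, not a tool or API mentioned in the article.
```python
import re

# Hypothetical patterns for data an organization may not want sent to external GenAI services.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "possible API key": re.compile(r"\b(?:sk|ghp|AKIA)[A-Za-z0-9_\-]{16,}\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


def send_to_genai(prompt: str) -> None:
    """Refuse to forward a prompt that appears to contain sensitive data."""
    findings = screen_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}.")
        return
    # Placeholder for the call to an approved GenAI provider.
    print("Prompt passed screening; forwarding to the approved GenAI service.")


if __name__ == "__main__":
    send_to_genai("Summarize this contract for client jane.doe@example.com")
```
    Real data-loss-prevention tooling is far more involved, but even a coarse gate like this targets the failure mode described above: well-meaning employees disclosing sensitive data to an external service.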
    0 Comments ·0 Shares ·158 Views
  • Why hairy animals shake themselves dry
    www.newscientist.com
    Hairy animals including mice and dogs shake themselves dry. atikinka2/Getty Images
    If you have ever been close to a dog after it has gone for a swim, you have probably been sprayed with water flinging from its fur. We now know the brain pathway that causes animals to rapidly wiggle themselves dry, a phenomenon known as the "wet dog shake".
    At least 12 different types of nerve cells help hairy mammals like mice and dogs feel physical sensations, such as temperature changes or touch. Yet it wasn't clear which of these neurons sense irritating substances that animals want to shake
    0 Comments ·0 Shares ·138 Views
  • Slick trick separates oil and water with 99.9 per cent purity
    www.newscientist.com
    Oil and water are difficult to separate without leaving some impurities. Abaca Press/Alamy
    Mixtures of oil and water can be efficiently separated by pumping them into thin channels between semipermeable membranes, paving the way to cheaper and cleaner ways to deal with industrial waste. Experimental prototypes managed to recover both oil and water with a purity greater than 99.9 per cent.
    Various methods already exist to split such mixtures into component parts, including spinning them in a centrifuge, mechanically skimming oil from the surface and splitting them with chemicals, electrical charges or semipermeable membranes, which allow some substances through, but not others. Membranes are the simplest method, but are currently imperfect, leaving behind a stubborn mix of oily water or watery oil.
    Now, Hao-Cheng Yang at Zhejiang University in China and his colleagues have developed a more efficient method that uses two membranes, one hydrophobic layer that allows oil to pass and one hydrophilic layer that allows water to pass, in order to cleanly separate both.
    Yang says the idea has been tried before with less-than-impressive results. This is because as oil or water is removed from the mixture, the concentration of the components changes, making the membranes less efficient.
    To overcome this, the team pumped the mixture into a thin channel between the two layers. In this confined space, droplets of oil are more likely to collide and accumulate, which means they can then be removed more efficiently by the hydrophobic membrane. This, in turn, increases the ratio of water in the mixture, creating a beneficial feedback loop that ensures both clean oil and water are removed continually.
    "When we put the membranes [close] together, they will affect each other, making the process continue," says Yang. "There's a feedback between the two processes."
    In tests, the researchers found that total oil recovery increases from just 5 per cent to 97 per cent and water recovery increases from 19 per cent to 75 per cent as the channel width is narrowed from 125 millimetres to 4 millimetres. The purity of the recovered oil and water is more than 99.9 per cent, with only small amounts of waste left, says Yang.
    The team is in talks with industry and Yang believes that the process is so simple that it could easily be scaled up to suitable levels within a few years.
    Journal reference: Science, DOI: 10.1126/science.adt2513
    0 Comments ·0 Shares ·135 Views
  • What's next for reproductive rights in the US
    www.technologyreview.com
    This article first appeared in The Checkup, MIT Technology Review's weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
    Earlier this week, Americans cast their votes in a seminal presidential election. But it wasn't just the future president of the US that was on the ballot. Ten states also voted on abortion rights. Two years ago, the US Supreme Court overturned Roe v. Wade, a legal decision that protected the right to abortion. Since then, abortion bans have been enacted in multiple states, and millions of people in the US have lost access to local clinics.
    Now, some states are voting to extend and protect access to abortion. This week, seven states voted in support of such measures. And voters in Missouri, a state that has long restricted access, have voted to overturn its ban. It's not all good news for proponents of reproductive rights; some states voted against abortion access. And questions remain over the impact of a second term under former president Donald Trump, who is set to return to the post in January.
    Roe v. Wade, the legal decision that enshrined a constitutional right to abortion in the US in 1973, guaranteed the right to an abortion up to the point of fetal viability, which is generally considered to be around 24 weeks of pregnancy. It was overturned by the US Supreme Court in the summer of 2022. Within 100 days of the decision, 13 states had enacted total bans on abortion from the moment of conception. Clinics in these states could no longer offer abortions. Other states also restricted abortion access. In that 100-day period, 66 of the 79 clinics across 15 states stopped offering abortion services, and 26 closed completely, according to research by the Guttmacher Institute. The political backlash to the decision was intense.
    This week, abortion was on the ballot in 10 states: Arizona, Colorado, Florida, Maryland, Missouri, Montana, Nebraska, Nevada, New York, and South Dakota. And seven of them voted in support of abortion access. The impact of these votes will vary by state. Abortion was already legal in Maryland, for example. But the new measures should make it more difficult for lawmakers to restrict reproductive rights in the future. In Arizona, abortions after 15 weeks had been banned since 2022. There, voters approved an amendment to the state constitution that will guarantee access to abortion until fetal viability.
    Missouri was the first state to enact an abortion ban once Roe v. Wade was overturned. The state's current Right to Life of the Unborn Child Act prohibits doctors from performing abortions unless there is a medical emergency. It has no exceptions for rape or incest. This week, the state voted to overturn that ban and protect access to abortion up to fetal viability.
    Not all states voted in support of reproductive rights. Amendments to expand access failed to garner enough support in Nebraska, South Dakota, and Florida. In Florida, for example, where abortions after six weeks of pregnancy are banned, an amendment to protect access until fetal viability got 57% of the vote, falling just short of the 60% the state required for it to pass.
    It's hard to predict how reproductive rights will fare over the course of a second Trump term. Trump himself has been inconsistent on the issue. During his first term, he installed members of the Supreme Court who helped overturn Roe v. Wade. During his most recent campaign he said that decisions on reproductive rights should be left to individual states. Trump, himself a Florida resident, has refused to comment on how he voted in the state's recent ballot question on abortion rights. When asked, he said that the reporter who posed the question should just stop talking about that, according to the Associated Press.
    State decisions can affect reproductive rights beyond abortion access. Just look at Alabama. In February, the Alabama Supreme Court ruled that frozen embryos can be considered children under state law. Embryos are routinely cryopreserved in the course of in vitro fertilization treatment, and the ruling was considered likely to significantly restrict access to IVF in the state. (In March, the state passed another law protecting clinics from legal repercussions should they damage or destroy embryos during IVF procedures, but the status of embryos remains unchanged.)
    The fertility treatment became a hot topic during this year's campaign. In October, Trump bizarrely referred to himself as the "father of IVF." That title is usually reserved for Robert Edwards, the British researcher who won the 2010 Nobel prize in physiology or medicine for developing the technology in the 1970s. Whatever is in store for reproductive rights in the US in the coming months and years, all we've seen so far suggests that it's likely to be a bumpy ride.
    Now read the rest of The Checkup
    Read more from MIT Technology Review's archive
    My colleague Rhiannon Williams reported on the immediate aftermath of the decision that reversed Roe v. Wade when it was announced a couple of years ago. The Alabama Supreme Court ruling on embryos could also affect the development of technologies designed to serve as artificial wombs, as Antonio Regalado explained at the time. Other technologies are set to change the way we have babies. Some, which could lead to the creation of children with four parents or none at all, stand to transform our understanding of parenthood. We've also reported on attempts to create embryo-like structures using stem cells. These structures look like embryos but are created without eggs or sperm. There's a wild race afoot to make these more like the real thing. But both scientific and ethical questions remain over how far we can, and should, go.
    My colleagues have been exploring what the US election outcome might mean for climate policies. Senior climate editor James Temple writes that Trump's victory is a stunning setback for climate change. And senior reporter Casey Crownhart explains how efforts including a trio of laws implemented by the Biden administration, which massively increased climate funding, could be undone.
    From around the web
    Donald Trump has said he'll let Robert F. Kennedy Jr. "go wild" on health. Here's where the former environmental lawyer and independent candidate (who has no medical or public health degrees) stands on vaccines, fluoride, and the Affordable Care Act. (New York Times)
    Bird flu has been detected in pigs on a farm in Oregon. It's a worrying development that virologists were dreading. (The Conversation)
    And, in case you need it, here's some lighter reading: Scientists are sequencing the DNA of tiny marine plankton for the first time. (Come for the story of the scientific expedition; stay for the beautiful images of jellies and sea sapphires.) (The Guardian)
    Dolphins are known to communicate with whistles and clicks. But scientists were surprised to find a highly vocal solitary dolphin in the Baltic Sea. They think the animal is engaging in dolphin self-talk. (Bioacoustics)
    How much do you know about baby animals?
Test your knowledge in this quiz. (National Geographic)
    0 Comments ·0 Shares ·165 Views