MIT Technology Review
MIT Technology Review
Our in-depth reporting on innovation reveals and explains what’s really happening now to help you know what’s coming next. Get our journalism: http://technologyreview.com/newsletters.
1 people like this
210 Posts
2 Photos
0 Videos
0 Reviews
Recent Updates
  • Therecan beno winners in a US-China AI arms race
    www.technologyreview.com
    The United States and China are entangled in what many have dubbed an AI arms race. In the early days of this standoff, US policymakers drove an agenda centered on winning the race, mostly from an economic perspective. In recent months, leading AI labs such asOpenAIandAnthropicgot involved in pushing the narrative of beating China in what appeared to be an attempt to align themselves with the incoming Trump administration. The belief that the UScanwin in such a race was based mostly on the early advantage it had over China in advanced GPU compute resources and the effectiveness of AIsscaling laws. But now it appears that access to large quantities of advanced compute resources is no longer the defining or sustainable advantage many had thought it would be. In fact, the capability gap between leading US and Chinese models has essentially disappeared, and in one important way the Chinese models may now have an advantage: They are able to achievenear equivalent resultswhile using only a small fraction of the compute resources available to the leading Western labs. The AI competition is increasingly being framed within narrow national security terms, as a zero-sum game, and influenced by assumptions that a future war between the US and China, centered on Taiwan, is inevitable. The US has employedchokepoint tacticsto limit Chinas access to key technologies like advanced semiconductors, and China has responded by accelerating its efforts toward self-sufficiency and indigenous innovation, which is causing US efforts to backfire. Recently even outgoing US Secretary of Commerce Gina Raimondo, a staunch advocate for strict export controls, finally admitted that using such controls tohold back Chinas progress on AI and advanced semiconductors is a fools errand.Ironically, the unprecedented export control packages targeting Chinas semiconductor and AI sectors have unfolded alongside tentativebilateral and multilateral engagementsto establish AI safety standards and governance frameworkshighlighting a paradoxical desire of both sides to compete and cooperate. When we consider this dynamic more deeply, it becomes clear that the real existential threat ahead is not from China, but from the weaponization of advanced AI by bad actors and rogue groups who seek to create broad harms, gain wealth, or destabilize society. As with nuclear arms, China, as a nation-state, must be careful about using AI-powered capabilities against US interests, but bad actors, including extremist organizations, would be much more likely to abuse AI capabilities with little hesitation. Given the asymmetric nature of AI technology, which is much like cyberweapons, it is very difficult to fully prevent and defend against a determined foe who has mastered its use and intends to deploy it for nefarious ends. Given the ramifications, it is incumbent on the US and China as global leaders in developing AI technology to jointly identify and mitigate such threats, collaborate on solutions, and cooperate on developing a global framework for regulating the most advanced modelsinstead of erecting new fences, small or large, around AI technologies and pursing policies that deflect focus from the real threat. It is now clearer than ever that despite the high stakes and escalating rhetoric, there will not and cannot be any long-term winners if the intense competition continues on its current path. Instead, the consequences could be severeundermining global stability, stalling scientific progress, and leading both nations toward a dangerous technological brinkmanship. This is particularly salient given the importance of Taiwan and the global foundry leader TSMC in the AI stack, and the increasing tensions around the high-tech island. Heading blindly down this path will bring the risk of isolation and polarization, threatening not only international peace but also the vast potential benefits AI promises for humanity as a whole. Historical narratives, geopolitical forces, and economic competition have all contributed to the current state of the US-China AI rivalry. Arecent reportfrom the US-China Economic and Security Review Commission, for example, frames the entire issue in binary terms, focused on dominance or subservience. This winner takes all logic overlooks the potential for global collaboration and could even provoke a self-fulfilling prophecy by escalating conflict. Under the new Trump administration this dynamic will likely become more accentuated, with increasing discussion of aManhattan Project for AIand redirection of US military resources fromUkraine toward China. Fortunately, a glimmer of hope for a responsible approach to AI collaboration is appearing now as Donald Trump recentlyposted on January 17 that hed restarted direct dialoguewith Chairman Xi Jinping regarding various areas of collaboration, and given past cooperation should continue to be partners and friends. The outcome of the TikTok drama, putting Trump at odds with sharp China critics in his own administration and Congress, will be a preview of how his efforts to put US China relations on a less confrontational trajectory. The promise of AI for good Western mass media usually focuses on attention-grabbing issues described in terms like the existential risks of evil AI. Unfortunately, the AI safety experts who get the most coverage often recite the same narratives, scaring the public. In reality, no credible research shows that more capable AI will become increasingly evil. We need to challenge the current false dichotomy of pure accelerationism versus doomerism to allow for a model more likecollaborative acceleration. It is important to note the significant difference betweenthe way AI is perceived in Western developed countries and developing countries. In developed countries the public sentiment toward AI is 60% to 70% negative, while in the developing markets the positive ratings are 60% to 80%. People in the latter places have seen technology transform their lives for the better in the past decades and are hopeful AI will help solve the remaining issues they face by improving education, health care, and productivity, thereby elevating their quality of life and giving them greater world standing. What Western populations often fail to realize is that those same benefits could directly improve their lives as well, given the high levels of inequity even in developed markets. Consider what progress would be possible if we reallocated the trillions that go into defense budgets each year to infrastructure, education, and health-care projects. Once we get to the next phase, AI will help us accelerate scientific discovery, develop new drugs, extend our health span, reduce our work obligations, and ensure access to high-quality education for all. This may sound idealistic, but given current trends, most of this can become a reality within a generation, and maybe sooner. To get there well need more advanced AI systems, which will be a much more challenging goal if we divide up compute/data resources and research talent pools. Almost half of all top AI researchers globally (47%) wereborn or educatedin China, according toindustry studies.Its hard to imagine how we could have gotten where we are without the efforts of Chinese researchers. Active collaboration with China on joint AI research could be pivotal to supercharging progress with a major infusion of quality training data and researchers. The escalating AI competition between the US and China poses significant threats to both nations and to the entire world. The risks inherent in this rivalry are not hypotheticalthey could lead to outcomes that threaten global peace, economic stability, and technological progress. Framing the development of artificial intelligence as a zero-sum race undermines opportunities for collective advancement and security. Rather than succumb to the rhetoric of confrontation, it is imperative that the US and China, along with their allies, shift toward collaboration and shared governance. Our recommendations for policymakers: Reduce national security dominance over AI policy.Both the US and China must recalibrate their approach to AI development, moving away from viewing AI primarily as a military asset. This means reducing the emphasis on national security concerns that currently dominate every aspect of AI policy. Instead, policymakers should focus on civilian applications of AI that can directly benefit their populations and address global challenges, such as health care, education, and climate change. The US also needs to investigate how to implement a possible universal basic income program as job displacement from AI adoption becomes a bigger issue domestically. 2.Promote bilateral and multilateral AI governance.Establishing a robust dialogue between the US, China, and other international stakeholders is crucial for the development of common AI governance standards. This includes agreeing on ethical norms, safety measures, and transparency guidelines for advanced AI technologies. A cooperative framework would help ensure that AI development is conducted responsibly and inclusively, minimizing risks while maximizing benefits for all. 3.Expand investment in detection and mitigation of AI misuse.The risk of AI misuse by bad actors, whether through misinformation campaigns, telecom, power, or financial system attacks, or cybersecurity attacks with the potential to destabilize society, is the biggest existential threat to the world today. Dramatically increasing funding for and international cooperation in detecting and mitigating these risks is vital. The US and China must agree on shared standards for the responsible use of AI and collaborate on tools that can monitor and counteract misuse globally. 4.Create incentives for collaborative AI research.Governments should provide incentives for academic and industry collaborations across borders. By creating joint funding programs and research initiatives, the US and China can foster an environment where the best minds from both nations contribute to breakthroughs in AI that serve humanity as a whole. This collaboration would help pool talent, data, and compute resources, overcoming barriers that neither country could tackle alone. A global effort akin to theCERN for AIwill bring much more value to the world, and a peaceful end, than aManhattan Project for AI,which is being promoted by many in Washington today. 5.Establish trust-building measures.Both countries need to prevent misinterpretations of AI-related actions as aggressive or threatening. They could do this via data-sharing agreements, joint projects in nonmilitary AI, and exchanges between AI researchers. Reducing import restrictions for civilian AI use cases, for example, could help the nations rebuild some trust and make it possible for them to discuss deeper cooperation on joint research. These measures would help build transparency, reduce the risk of miscommunication, and pave the way for a less adversarial relationship. 6.Support the development of a global AI safety coalition.A coalition that includes major AI developers from multiple countries could serve as a neutral platform for addressing ethical and safety concerns. This coalition would bring together leading AI researchers, ethicists, and policymakers to ensure that AI progresses in a way that is safe, fair, and beneficial to all. This effort should not exclude China, as it remains an essential partner in developing and maintaining a safe AI ecosystem. 7.Shift the focus toward AI for global challenges.It is crucial that the worlds two AI superpowers use their capabilities to tackle global issues, such as climate change, disease, and poverty. By demonstrating the positive societal impacts of AI through tangible projects and presenting it not as a threat but as a powerful tool for good, the US and China can reshape public perception of AI. Our choice is stark but simple: We can proceed down a path of confrontation that will almost certainly lead to mutual harm, or we can pivot toward collaboration, which offers the potential for a prosperous and stable future for all. Artificial intelligence holds the promise to solve some of the greatest challenges facing humanity, but realizing this potential depends on whether we choose to race against each other or work together. The opportunity to harness AI for the common good is a chance the world cannot afford to miss. Alvin Wang Graylin Alvin Wang Graylin is a technology executive, author, investor, and pioneer with over 30 years of experience shaping innovation in AI, XR (extended reality), cybersecurity, and semiconductors. Currently serving as global vice president at HTC, Graylin was the companys China president from 2016 to 2023. He is the author ofOur Next Reality. Paul Triolo Paul Triolo is apartner for China and technology policy leadat DGA-Albright Stonebridge Group. He advises clients in technology, financial services, and other sectors as they navigate complex political and regulatory matters in the US, China, the European Union, India, and around the world.
    0 Comments ·0 Shares ·42 Views
  • A new company plans to use Earth as a chemical reactor
    www.technologyreview.com
    Forget massive steel tankssome scientists want to make chemicals with the help of rocks deep beneath Earths surface. New research shows that ammonia, a chemical crucial for fertilizer, can be produced from rocks at temperatures and pressures that are common in the subsurface. The research was published today in Joule, and MIT Technology Review can exclusively report that a new company, called Addis Energy, was founded to commercialize the process. Ammonia is used in most fertilizers and is a vital part of our modern food system. Its also being considered for use as a green fuel in industries like transoceanic shipping. The problem is that current processes used to make ammonia require a lot of energy and produce huge amounts of the greenhouse gases that cause climate changeover 1% of the global total. The new study finds that the planets internal conditions can be used to produce ammonia in a much cleaner process. Earth can be a factory for chemical production, says Iwnetim Abate, an MIT professor and author of the new study. This idea could be a major change for the chemical industry, which today relies on huge facilities running reactions at extremely high temperatures and pressures to make ammonia. The key ingredients for ammonia production are sources of nitrogen and hydrogen. Much of the focus on cleaner production methods currently lies in finding new ways to make hydrogen, since that chemical makes up the bulk of ammonias climate footprint, says Patrick Molloy, a principal at the nonprofit research agency Rocky Mountain Institute. Recently, researchers and companies have located naturally occurring deposits of hydrogen underground. Iron-rich rocks tend to drive reactions that produce the gas, and these natural deposits could provide a source of low-cost, low-emissions hydrogen. While geologic hydrogen is still in its infancy as an industry, some researchers are hoping to help the process along by stimulating production of hydrogen underground. With the right rocks, heat, and a catalyst, you can produce hydrogen cheaply and without emitting large amounts of climate pollution. Hydrogen can be difficult to transport, though, so Abate was interested in going one step further by letting the conditions underground do the hard work in powering chemical reactions that transform hydrogen and nitrogen into ammonia. As you dig, you get heat and pressure for free, he says. To test out how this might work, Abate and his team crushed up iron-rich minerals and added nitrates (a nitrogen source), water (a hydrogen source), and a catalyst to help reactions along in a small reactor in the lab. They found that even at relatively low temperatures and pressures, they could make ammonia in a matter of hours. If the process were scaled up, the researchers estimate, one well could produce 40,000 tons of ammonia per day. While the reactions tend to go faster at high temperature and pressure, the researchers found that ammonia production could be an economically viable process even at 130 C (266 F) and a little over two atmospheres of pressure, conditions that would be accessible at depths reachable with existing drilling technology. While the reactions work in the lab, theres a lot of work to do to determine whether, and how, the process might actually work in the field. One thing the team will need to figure out is how to keep reactions going, because in the reaction that forms ammonia, the surface of the iron-rich rocks will be oxidized, leaving them in a state where they cant keep reacting. But Abate says the team is working on controlling how thick the unusable layer of rock is, and its composition, so the chemical reactions can continue. To commercialize this work, Abate is cofounding a company called Addis Energy with $4.25 million in pre-seed funds from investors including Engine Ventures. His cofounders include Michael Alexander and Charlie Mitchell (who have both spent time in the oil and gas industry) and Yet-Ming Chiang, an MIT professor and serial entrepreneur. The company will work on scaling up the research, including finding potential sites with the geological conditions to produce ammonia underground. The good news for scale-up efforts is that much of the necessary technology already exists in oil and gas operations, says Alexander, Addiss CEO. A field-deployed system will involve drilling, pumping fluid down into the ground, and extracting other fluids from beneath the surface, all very common operations in that industry. Theres novel chemistry thats wrapped in an oil and gas package, he says. The team will also work on refining cost estimates for the process and gaining a better understanding of safety and sustainability, Abate says. Ammonia is a toxic industrial chemical, but its common enough for there to be established procedures for handling, storing, and transporting it, says RMIs Molloy. Judging from the researchers early estimates, ammonia produced with this method could cost up to $0.55 per kilogram. Thats more than ammonia produced with fossil fuels today ($0.40/kg), but the technique would likely be less expensive than other low-emissions methods of producing the chemical. Tweaks to the process, including using nitrogen from the air instead of nitrates, could help cut costs further, even as low as $0.20/kg. New approaches to making ammonia could be crucial for climate efforts. Its a chemical thats essential to our way of life, says Karthish Manthiram, a professor at Caltech who studies electrochemistry, including alternative ammonia production methods. The teams research appears to be designed with scalability in mind from the outset, and using Earth itself as a reactor is the kind of thinking needed to accelerate the long-term journey to sustainable chemical production, Manthiram adds. While the company focuses on scale-up efforts, theres plenty of fundamental work left for Abate and other labs to do to understand whats going on during the reactions at the atomic level, particularly at the interface between the rocks and the reacting fluid. Research in the lab is exciting, but its only the first step, Abate says. The next one is seeing if this actually works in the field.
    0 Comments ·0 Shares ·36 Views
  • The Download: AI for cancer diagnosis, and HIV prevention
    www.technologyreview.com
    This is today's edition ofThe Download,our weekday newsletter that provides a daily dose of what's going on in the world of technology. Why its so hard to use AI to diagnose cancer Finding and diagnosing cancer is all about spotting patterns. Radiologists use x-rays and magnetic resonance imaging to illuminate tumors, and pathologists examine tissue from kidneys, livers, and other areas under microscopes. They look for patterns that show how severe a cancer is, whether particular treatments could work, and where the malignancy may spread. Visual analysis is something that AI has gotten quite good at since the first image recognition models began taking off nearly 15 years ago. Even though no model will be perfect, you can imagine a powerful algorithm someday catching something that a human pathologist missed, or at least speeding up the process of getting a diagnosis.Were starting to see lots of new efforts to build such a modelat least seven attempts in the last year alone. But they all remain experimental. What will it take to make them good enough to be used in the real world? Read the full story. James O'Donnell This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. Long-acting HIV prevention meds: 10 Breakthrough Technologies 2025 In June 2024, results from a trial of a new medicine to prevent HIV were announcedand they were jaw-dropping. Lenacapavir, a treatment injected once every six months, protected over 5,000 girls and women in Uganda and South Africa from getting HIV. And it was 100% effective. So far, the FDA has approved the drug only for people who already have HIV thats resistant to other treatments. But its producer Gilead has signed licensing agreements with manufacturers to produce generic versions for HIV prevention in 120 low-income countries. The United Nations has set a goal of ending AIDS by 2030. Its ambitious, to say the least: We still see over 1 million new HIV infections globally every year. But we now have the medicines to get us there. What we need is access. Read the full story. Jessica Hamzelou Long-acting HIV prevention meds is one of our 10 Breakthrough Technologies for 2025, MIT Technology Reviews annual list of tech to watch. Check out the rest of the list, and cast your vote for the honorary 11th breakthrough. The must-reads Ive combed the internet to find you todays most fun/important/scary/fascinating stories about technology. 1 Donald Trump signed an executive order delaying TikToks ban Parent company ByteDance has 75 days to reach a deal to stay live in the US. (WP $)+ China appears to be keen to keep the platform operating, too. (WSJ $)2 Neo-Nazis are celebrating Elon Musks salutes Theyre thrilled by the two Nazi-like salutes he gave at a post-inauguration rally. (Wired $)+ Whether the gestures were intentional or not, extremists have chosen to interpret them that way. (Rolling Stone $)+ MAGA is all about granting unchecked power to the already powerful. (Vox)+ How tech billionaires are hoping Trump will reward them for their support. (NY Mag $) 3 Trump is withdrawing the US from the World Health OrganizationHes accused the agency of mishandling the covid 19 pandemic. (Ars Technica)+ He first tried to leave the WHO in 2020, but failed to complete it before he left office. (Reuters) + Trump is also working on pulling the US out of the Paris climate agreement. (The Verge)4 Meta will keep using fact checkers outside the USfor now It wants to see how its crowdsourced fact verification system works in America before rolling it out further. (Bloomberg $)5 Startup Friend has delayed shipments of its AI necklace Customers are unlikely to receive their pre-orders before Q3. (TechCrunch)+ Introducing: The AI Hype Index. (MIT Technology Review)6 This sophisticated tool can pinpoint where a photo was taken in seconds Members of the public have been trying to use GeoSpy for nefarious means for months. (404 Media)7 Los Angeles is covered in ashAnd it could take years before it fully disappears. (The Atlantic $) 8 Singapore is turning to AI companions to care for its eldersRobots are filling the void left by an absence of human nurses. (Rest of World) + Inside Japans long experiment in automating elder care. (MIT Technology Review)9 The lost art of using a pen Typing and swiping are replacing good old fashioned paper and ink. (The Guardian)10 LinkedIn is getting humorous Posts are getting more personal, with a decidedly comedic bent. (FT $) Quote of the day Its been really beautiful to watch how two communities that would be considered polar opposites have come together. Khalil Bowens, a content creator based in Los Angeles, reflects on the influx of Americans joining Chinese social media app Xiaohongshu to the Wall Street Journal. The big story Inside the messy ethics of making war with machines August 2023 In recent years, intelligent autonomous weaponsweapons that can select and fire upon targets without any human inputhave become a matter of serious concern. Giving an AI system the power to decide matters of life and death would radically change warfare forever. Intelligent autonomous weapons that fully displace human decision-making have (likely) yet to see real-world use. However, these systems have become sophisticated enough to raise novel questionsones that are surprisingly tricky to answer. What does it mean when a decision is only part human and part machine? And when, if ever, is it ethical for that decision to be a decision to kill? Read the full story. Arthur Holland Michel We can still have nice things A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.) + Baby octopuses arent just cutethey can change color from the moment theyre born + Nintendo artist Takaya Imamura played a key role in making the company the gaming juggernaut it is today.+ David Lynch wasnt just a master of imagery, the way he deployed music to creep us out was second to none.+ Only got a bag of rice in the cupboard? No problem.
    0 Comments ·0 Shares ·39 Views
  • Why its so hard to use AI to diagnose cancer
    www.technologyreview.com
    This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. Peering into the body to find and diagnose cancer is all about spotting patterns. Radiologists use x-rays and magnetic resonance imaging to illuminate tumors, and pathologists examine tissue from kidneys, livers, and other areas under microscopes and look for patterns that show how severe a cancer is, whether particular treatments could work, and where the malignancy may spread. In theory,artificial intelligence should be great at helping out. Our job is pattern recognition, says Andrew Norgan, a pathologist and medical director of the Mayo Clinics digital pathology platform. We look at the slide and we gather pieces of information that have been proven to be important. Visual analysis is something that AI has gotten quite good at since the first image recognition models began taking off nearly 15 years ago. Even though no model will be perfect, you can imagine a powerful algorithm someday catching something that a human pathologist missed, or at least speeding up the process of getting a diagnosis. Were starting to see lots of new efforts to build such a modelat least seven attempts in the last year alonebut they all remain experimental. Details about the latest effort to build such a model, led by the AI health company Aignostics with the Mayo Clinic, were published on arXiv earlier this month. The paper has not been peer-reviewed, but it reveals much about the challenges of bringing such a tool to real clinical settings. The model, called Atlas, was trained on 1.2 million tissue samples from 490,000 cases. Its accuracy was tested against six other leading AI pathology models. These models compete on shared tests like classifying breast cancer images or grading tumors, where the models predictions are compared with the correct answers given by human pathologists. Atlas beat rival models on six out of nine tests. It earned its highest score for categorizing cancerous colorectal tissue, reaching the same conclusion as human pathologists 97.1% of the time. For another task, thoughclassifying tumors from prostate cancer biopsiesAtlas beat the other models high scores with a score of just 70.5%. Its average across nine benchmarks showed that it got the same answers as human experts 84.6% of the time. Lets think about what this means. The best way to know whats happening to cancerous cells in tissues is to have a sample examined by a pathologist, so thats the performance that AI models are measured against. The best models are approaching humans in particular detection tasks but lagging behind in many others. So how good does a model have to be to be clinically useful? Ninety percent is probably not good enough. You need to be even better, says Carlo Bifulco, chief medical officer at Providence Genomics and co-creator of GigaPath, one of the other AI pathology models examined in the Mayo Clinic study. But, Bifulco says, AI models that dont score perfectly can still be useful in the short term, and could potentially help pathologists speed up their work and make diagnoses more quickly. What obstacles are getting in the way of better performance? Problem number one is training data. Fewer than 10% of pathology practices in the US are digitized, Norgan says. That means tissue samples are placed on slides and analyzed under microscopes, and then stored in massive registries without ever being documented digitally. Though European practices tend to be more digitized, and there are efforts underway to create shared data sets of tissue samples for AI models to train on, theres still not a ton to work with. Without diverse data sets, AI models struggle to identify the wide range of abnormalities that human pathologists have learned to interpret. That includes for rare diseases, says Maximilian Alber, cofounder and CTO of Aignostics. Scouring the publicly available databases for tissue samples of particularly rare diseases, youll find 20 samples over 10 years, he says. Around 2022, the Mayo Clinic foresaw that this lack of training data would be a problem. It decided to digitize all of its own pathology practices moving forward, along with 12 million slides from its archives dating back decades (patients had consented to their being used for research). It hired a company to build a robot that began taking high-resolution photos of the tissues, working through up to a million samples per month. From these efforts, the team was able to collect the 1.2 million high-quality samples used to train the Mayo model. This brings us to problem number two for using AI to spot cancer. Tissue samples from biopsies are tinyoften just a couple of millimeters in diameterbut are magnified to such a degree that digital images of them contain more than 14 billion pixels. That makes them about 287,000 times larger than images used to train the best AI image recognition models to date. That obviously means lots of storage costs and so forth, says Hoifung Poon, an AI researcher at Microsoft who worked with Bifulco to create GigaPath, which was featured in Nature Thirdly, theres the question of which benchmarks are most important for a cancer-spotting AI model to perform well on. The Atlas researchers tested their model in the challenging domain of molecular-related benchmarks, which involves trying to find clues from sample tissue images to guess whats happening on a molecular level. Heres an example: Your bodys mismatch repair genes are of particular concern for cancer, because they catch errors made when your DNA gets replicated. If these errors arent caught, they can drive the development and progression of cancer. Some pathologists might tell you they kind of get a feeling when they think somethings mismatch-repair deficient based on how it looks, Norgan says. But pathologists dont act on that gut feeling alone. They can do molecular testing for a more definitive answer. What if instead, Norgan says, we can use AI to predict whats happening on the molecular level? Its an experiment: Could the AI model spot underlying molecular changes that humans cant see? Generally no, it turns out. Or at least not yet. Atlass average for the molecular testing was 44.9%. Thats the best performance for AI so far, but it shows this type of testing has a long way to go. Bifulco says Atlas represents incremental but real progress. My feeling, unfortunately, is that everybody's stuck at a similar level, he says. We need something different in terms of models to really make dramatic progress, and we need larger data sets. Now read the rest of The Algorithm Deeper Learning OpenAI has created an AI model for longevity science AI has long had its fingerprints on the science of protein folding. But OpenAI now says its created a model that can engineer proteins, turning regular cells into stem cells. That goal has been pursued by companies in longevity science, because stem cells can produce any other tissue in the body and, in theory, could be a starting point for rejuvenating animals, building human organs, or providing supplies of replacement cells. Why it matters: The work was a product of OpenAIs collaboration with the longevity company Retro Labs, in which Sam Altman invested $180 million. It represents OpenAIs first model focused on biological data and its first public claim that its models can deliver scientific results. The AI model reportedly engineered more effective proteins, and more quickly, than the companys scientists could. But outside scientists cant evaluate the claims until the studies have been published. Read more from Antonio Regalado. Bits and Bytes What we know about the TikTok ban The popular video app went dark in the United States late Saturday and then came back around noon on Sunday, even as a law banning it took effect. (The New York Times) Why Meta might not end up like X X lost lots of advertising dollars as Elon Musk changed the platform's policies. But Facebook and Instagrams massive scale make them hard platforms for advertisers to avoid. (Wall Street Journal) What to expect from Neuralink in 2025 More volunteers will get Elon Musks brain implant, but dont expect a product soon. (MIT Technology Review) A former fact-checking outlet for Meta signed a new deal to help train AI models Meta paid media outlets like Agence France-Presse for years to do fact checking on its platforms. Since Meta announced it would shutter those programs, Europes leading AI company, Mistral, has signed a deal with AFP to use some of its content in its AI models. (Financial Times) OpenAIs AI reasoning model thinks in Chinese sometimes, and no one really knows why While it comes to its response, the model often switches to Chinese, perhaps a reflection of the fact that many data labelers are based in China. (Tech Crunch)
    0 Comments ·0 Shares ·53 Views
  • The Download: AIs coding promises, and OpenAIs longevity push
    www.technologyreview.com
    This is today's edition ofThe Download,our weekday newsletter that provides a daily dose of what's going on in the world of technology. The second wave of AI coding is here Ask people building generative AI what generative AI is good for right nowwhat theyre really fired up aboutand many will tell you: coding. Everyone from established AI giants to buzzy startups is promising to take coding assistants to the next level. Instead of providing developers with a kind of supercharged autocomplete, this next generation can prototype, test, and debug code for you. The upshot is that developers could essentially turn into managers, who may spend more time reviewing and correcting code written by a model than writing it from scratch themselves.But theres more. Many of the people building generative coding assistants think that they could be a fast track to artificial general intelligence, the hypothetical superhuman technology that a number of top firms claim to have in their sights.Read the full story.Will Douglas Heaven OpenAI has created an AI model for longevity science When you think of AIs contributions to science, you probably think of AlphaFold, the Google DeepMind protein-folding program that earned its creator a Nobel Prize last year. Now OpenAI says its getting into the science game toowith a model for engineering proteins. The company says it has developed a language model that dreams up proteins capable of turning regular cells into stem cellsand that it has handily beat humans at the task. The work represents OpenAIs first model focused on biological data and its first public claim that its models can deliver unexpected scientific results. But until outside scientists get their hands on it, we cant say just how impressive it really is. Read the full story. Antonio Regalado Cleaner jet fuel: 10 Breakthrough Technologies 2025 New fuels made from used cooking oil, industrial waste, or even gases in the air could help power planes without fossil fuels. Depending on the source, they can reduce emissions by half or nearly eliminate them. And they can generally be used in existing planes, which could enable quick climate progress. These alternative jet fuels have been in development for years, but now theyre becoming a big business, with factories springing up to produce them and new government mandates requiring their use. So while only about 0.5% of the roughly 100 billion gallons of jet fuel consumed by planes last year was something other than fossil fuel, that could soon change. Read the full story.Casey Crownhart Cleaner jet fuel is one of our 10 Breakthrough Technologies for 2025, MIT Technology Reviews annual list of tech to watch. Check out the rest of the list, and cast your vote for the honorary 11th breakthrough. The must-reads Ive combed the internet to find you todays most fun/important/scary/fascinating stories about technology. 1 TikTok is back online in the US The company thanked Donald Trump for vowing to fight the federal ban its facing. (The Verge)+ The app went dark for users in America for around 14 hours. (WP $)+ AI search startup Perplexity has suggested merging with TikTok. (CNBC)+ Heres how people actually make money on TikTok. (WSJ $)2 Trumps staff has an Elon Musk problem Aides are annoyed by his constant contributions to matters he has little knowledge of. (WSJ $)+ A power struggle between the two men is inevitable. (Slate $)+ The great and the good of crypto attended a VIP Trump party on Friday. (NY Mag $)3 AI is speeding up the Pentagons kill list Although the US military cant use the tech to directly kill humans, AI is making it faster and easier to plan how to do just that. (TechCrunch)+ OpenAIs new defense contract completes its military pivot. (MIT Technology Review)4 The majority of Americans havent had their latest covid booster Though they could help to protect youand others. (Undark)+ Its five years today since the US registered its first covid case. (USA Today)5 Europol is cracking down on encryption The agency plans to pressure Big Tech to give police access to encrypted messages. (FT $)6 This Swiss startup has created a powerful robotic wormBorobotics wants to deploy the bots to dig for geo-thermal heat in our gardens. (The Next Web) 7 Thousands of lithium batteries were destroyed in a massive fireThe worlds largest battery storage plant went up in flames in California. (New Scientist $) + Three takeaways about the current state of batteries. (MIT Technology Review)8 Amazons delivery drones struggle in the rain Bloomberg $) 9 A Ring doorbell captured a meteorite crashing to Earth Its the first known example of a meteorite fall documented by a doorbell cam. (CBS News)10 AI is coming for your wardrobe A wave of new apps will suggest what to wear and what to pair it with. (The Guardian)Quote of the day "TikTok was 100x better than anything you've created. An Instagram user snaps at Facebook founder Mark Zuckerberg in the wake of TikToks temporary US blackout over the weekend. The big story Running Tide is facing scientist departures and growing concerns over seaweed sinking for carbon removal June 2022 Running Tide, an aquaculture company based in Portland, Maine, hopes to set tens of thousands of tiny floating kelp farms adrift in the North Atlantic. The idea is that the fast-growing macroalgae will eventually sink to the ocean floor, storing away thousands of tons of carbon dioxide in the process. The company has raised millions in venture funding and gained widespread media attention. But it struggled to grow kelp along rope lines in the open ocean during initial attempts last year and has lost a string of scientists in recent months, sources with knowledge of the matter tell MIT Technology Review. What happens next? Read the full story. James Temple We can still have nice things A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.) + Why not cheer up your Monday with the kings of merriment, The Smiths?+ This is fascinating: how fish detect color and why its so different to us humans.+ The people of Finland know a thing or two about happiness.+ Its time to get planning a spring getaway, and these destinations look just fabulous.
    0 Comments ·0 Shares ·64 Views
  • The second wave of AI coding is here
    www.technologyreview.com
    Ask people building generative AI what generative AI is good for right nowwhat theyre really fired up aboutand many will tell you: coding. Thats something thats been very exciting for developers, Jared Kaplan, chief scientist at Anthropic, told MIT Technology Review this month: Its really understanding whats wrong with code, debugging it. Copilot, a tool built on top of OpenAIs large language models and launched by Microsoft-backed GitHub in 2022, is now used by millions of developers around the world. Millions more turn to general-purpose chatbots like Anthropics Claude, OpenAIs ChatGPT, and Google DeepMinds Gemini for everyday help. Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers, Alphabet CEO Sundar Pichai claimed on an earnings call in October: This helps our engineers do more and move faster. Expect other tech companies to catch up, if they havent already. Its not just the big beasts rolling out AI coding tools. A bunch of new startups have entered this buzzy market too. Newcomers such as Zencoder, Merly, Cosine, Tessl (valued at $750 million within months of being set up), and Poolside (valued at $3 billion before it even released a product) are all jostling for their slice of the pie. It actually looks like developers are willing to pay for copilots, says Nathan Benaich, an analyst at investment firm Air Street Capital: And so code is one of the easiest ways to monetize AI. Such companies promise to take generative coding assistants to the next level. Instead of providing developers with a kind of supercharged autocomplete, like most existing tools, this next generation can prototype, test, and debug code for you. The upshot is that developers could essentially turn into managers, who may spend more time reviewing and correcting code written by a model than writing it from scratch themselves. But theres more. Many of the people building generative coding assistants think that they could be a fast track to artificial general intelligence (AGI), the hypothetical superhuman technology that a number of top firms claim to have in their sights. The first time we will see a massively economically valuable activity to have reached human-level capabilities will be in software development, says Eiso Kant, CEO and cofounder of Poolside. (OpenAI has already boasted that its latest o3 model beat the companys own chief scientist in a competitive coding challenge.) Welcome to the second wave of AI coding. Correct code Software engineers talk about two types of correctness. Theres the sense in which a programs syntax (its grammar) is correctmeaning all the words, numbers, and mathematical operators are in the right place. This matters a lot more than grammatical correctness in natural language. Get one tiny thing wrong in thousands of lines of code and none of it will run. The first generation of coding assistants are now pretty good at producing code thats correct in this sense. Trained on billions of pieces of code, they have assimilated the surface-level structures of many types of programs. But theres also the sense in which a programs function is correct: Sure, it runs, but does it actually do what you wanted it to? Its that second level of correctness that the new wave of generative coding assistants are aiming forand this is what will really change the way software is made. Large language models can write code that compiles, but they may not always write the program that you wanted, says Alistair Pullen, a cofounder of Cosine. To do that, you need to re-create the thought processes that a human coder would have gone through to get that end result. The problem is that the data most coding assistants have been trained onthe billions of pieces of code taken from online repositoriesdoesnt capture those thought processes. It represents a finished product, not what went into making it. Theres a lot of code out there, says Kant. But that data doesnt represent software development. What Pullen, Kant, and others are finding is that to build a model that does a lot more than autocompleteone that can come up with useful programs, test them, and fix bugsyou need to show it a lot more than just code. You need to show it how that code was put together. In short, companies like Cosine and Poolside are building models that dont just mimic what good code looks likewhether it works well or notbut mimic the process that produces such code in the first place. Get it right and the models will come up with far better code and far better bug fixes. Breadcrumbs But you first need a data set that captures that processthe steps that a human developer might take when writing code. Think of these steps as a breadcrumb trail that a machine could follow to produce a similar piece of code itself. Part of that is working out what materials to draw from: Which sections of the existing codebase are needed for a given programming task? Context is critical, says Zencoder founder Andrew Filev. The first generation of tools did a very poor job on the context, they would basically just look at your open tabs. But your repo [code repository] might have 5000 files and theyd miss most of it. Zencoder has hired a bunch of search engine veterans to help it build a tool that can analyze large codebases and figure out what is and isnt relevant. This detailed context reduces hallucinations and improves the quality of code that large language models can produce, says Filev: We call it repo grokking. Cosine also thinks context is key. But it draws on that context to create a new kind of data set. The company has asked dozens of coders to record what they were doing as they worked through hundreds of different programming tasks. We asked them to write down everything, says Pullen: Why did you open that file? Why did you scroll halfway through? Why did you close it? They also asked coders to annotate finished pieces of code, marking up sections that would have required knowledge of other pieces of code or specific documentation to write. Cosine then takes all that information and generates a large synthetic data set that maps the typical steps coders take, and the sources of information they draw on, to finished pieces of code. They use this data set to train a model to figure out what breadcrumb trail it might need to follow to produce a particular program, and then how to follow it. Poolside, based in San Francisco, is also creating a synthetic data set that captures the process of coding, but it leans more on a technique called RLCEreinforcement learning from code execution. (Cosine uses this too, but to a lesser degree.) RLCE is analogous to the technique used to make chatbots like ChatGPT slick conversationalists, known as RLHFreinforcement learning from human feedback. With RLHF, a model is trained to produce text thats more like the kind human testers say they favor. With RLCE, a model is trained to produce code thats more like the kind that does what it is supposed to do when it is run (or executed). Gaming the system Cosine and Poolside both say they are inspired by the approach DeepMind took with its game-playing model AlphaZero. AlphaZero was given the steps it could takethe moves in a gameand then left to play against itself over and over again, figuring out via trial and error what sequence of moves were winning moves and which were not. They let it explore moves at every possible turn, simulate as many games as you can throw compute atthat led all the way to beating Lee Sedol, says Pengming Wang, a founding scientist at Poolside, referring to the Korean Go grandmaster that AlphaZero beat in 2016. Before Poolside, Wang worked at Google DeepMind on applications of AlphaZero beyond board games, including FunSearch, a version trained to solve advanced math problems. When that AlphaZero approach is applied to coding, the steps involved in producing a piece of codethe breadcrumbsbecome the available moves in a game, and a correct program becomes winning that game. Left to play by itself, a model can improve far faster than a human could. A human coder tries and fails one failure at a time, says Kant. Models can try things 100 times at once. A key difference between Cosine and Poolside is that Cosine is using a custom version of GPT-4o provided by OpenAI, which makes it possible to train on a larger data set than the base model can cope with, but Poolside is building its own large language model from scratch. Poolsides Kant thinks that training a model on code from the start will give better results than adapting an existing model that has sucked up not only billions of pieces of code but most of the internet. Im perfectly fine with our model forgetting about butterfly anatomy, he says. Cosine claims that its generative coding assistant, called Genie, tops the leaderboard on SWE-Bench, a standard set of tests for coding models. Poolside is still building its model but claims that what it has so far already matches the performance of GitHubs Copilot. I personally have a very strong belief that large language models will get us all the way to being as capable as a software developer, says Kant. Not everyone takes that view, however. Illogical LLMs To Justin Gottschlich, the CEO and founder of Merly, large language models are the wrong tool for the jobperiod. He invokes his dog: No amount of training for my dog will ever get him to be able to code, it just won't happen, he says. He can do all kinds of other things, but hes just incapable of that deep level of cognition. Having worked on code generation for more than a decade, Gottschlich has a similar sticking point with large language models. Programming requires the ability to work through logical puzzles with unwavering precision. No matter how well large language models may learn to mimic what human programmers do, at their core they are still essentially statistical slot machines, he says: I cant train an illogical system to become logical. Instead of training a large language model to generate code by feeding it lots of examples, Merly does not show its system human-written code at all. Thats because to really build a model that can generate code, Gottschlich argues, you need to work at the level of the underlying logic that code represents, not the code itself. Merlys system is therefore trained on an intermediate representationsomething like the machine-readable notation that most programming languages get translated into before they are run. Gottschlich wont say exactly what this looks like or how the process works. But he throws out an analogy: Theres this idea in mathematics that the only numbers that have to exist are prime numbers, because you can calculate all other numbers using just the primes. Take that concept and apply it to code, he says. Not only does this approach get straight to the logic of programming; its also fast, because millions of lines of code are reduced to a few thousand lines of intermediate language before the system analyzes them. Shifting mindsets What you think of these rival approaches may depend on what you want generative coding assistants to be. In November, Cosine banned its engineers from using tools other than its own products. It is now seeing the impact of Genie on its own engineers, who often find themselves watching the tool as it comes up with code for them. You now give the model the outcome you would like, and it goes ahead and worries about the implementation for you, says Yang Li, another Cosine cofounder. Pullen admits that it can be baffling, requiring a switch of mindset. We have engineers doing multiple tasks at once, flitting between windows, he says. While Genie is running code in one, they might be prompting it to do something else in another. These tools also make it possible to protype multiple versions of a system at once. Say youre developing software that needs a payment system built in. You can get a coding assistant to simultaneously try out several different optionsStripe, Mango, Checkoutinstead of having to code them by hand one at a time. Genie can be left to fix bugs around the clock. Most software teams use bug-reporting tools that let people upload descriptions of errors they have encountered. Genie can read these descriptions and come up with fixes. Then a human just needs to review them before updating the code base. No single human understands the trillions of lines of code in todays biggest software systems, says Li, and as more and more software gets written by other software, the amount of code will only get bigger. This will make coding assistants that maintain that code for us essential. The bottleneck will become how fast humans can review the machine-generated code, says Li. How do Cosines engineers feel about all this? According to Pullen, at least, just fine. If I give you a hard problem, youre still going to think about how you want to describe that problem to the model, he says. Instead of writing the code, you have to write it in natural language. But theres still a lot of thinking that goes into that, so youre not really taking the joy of engineering away. The itch is still scratched. Some may adapt faster than others. Cosine likes to invite potential hires to spend a few days coding with its team. A couple of months ago it asked one such candidate to build a widget that would let employees share cool bits of software they were working on to social media. The task wasnt straightforward, requiring working knowledge of multiple sections of Cosines millions of lines of code. But the candidate got it done in a matter of hours. This person who had never seen our code base turned up on Monday and by Tuesday afternoon hed shipped something, says Li. We thought it would take him all week. (They hired him.) But theres another angle too. Many companies will use this technology to cut down on the number of programmers they hire. Li thinks we will soon see tiers of software engineers. At one end there will be elite developers with million-dollar salaries who can diagnose problems when the AI goes wrong. At the other end, smaller teams of 10 to 20 people will do a job that once required hundreds of coders. It will be like how ATMs transformed banking, says Li. Anything you want to do will be determined by compute and not head count, he says. I think its generally accepted that the era of adding another few thousand engineers to your organization is over. Warp drives Indeed, for Gottschlich, machines that can code better than humans are going to be essential. For him, thats the only way we will build the vast, complex software systems that he thinks we will eventually need. Like many in Silicon Valley, he anticipates a future in which humans move to other planets. Thats only going to be possible if we get AI to build the software required, he says: Merlys real goal is to get us to Mars. Gottschlich prefers to talk about machine programming rather than coding assistants, because he thinks that term frames the problem the wrong way. I dont think that these systems should be assisting humansI think humans should be assisting them, he says. They can move at the speed of AI. Why restrict their potential? Theres this cartoon called The Flintstones where they have these cars, but they only move when the drivers use their feet, says Gottschlich. This is sort of how I feel most people are doing AI for software systems. But what Merlys building is, essentially, spaceships, he adds. Hes not joking. And I dont think spaceships should be powered by humans on a bicycle. Spaceships should be powered by a warp engine. If that sounds wildit is. But theres a serious point to be made about what the people building this technology think the end goal really is. Gottschlich is not an outlier with his galaxy-brained take. Despite their focus on products that developers will want to use today, most of these companies have their sights on a far bigger payoff. Visit Cosines website and the company introduces itself as a Human Reasoning Lab. It sees coding as just the first step toward a more general-purpose model that can mimic human problem-solving in a number of domains. Poolside has similar goals: The company states upfront that it is building AGI. Code is a way of formalizing reasoning, says Kant. Wang invokes agents. Imagine a system that can spin up its own software to do any task on the fly, he says. If you get to a point where your agent can really solve any computational task that you want through the means of softwarethat is a display of AGI, essentially. Down here on Earth, such systems may remain a pipe dream. And yet software engineering is changing faster than many at the cutting edge expected. Were not at a point where everythings just done by machines, but were definitely stepping away from the usual role of a software engineer, says Cosines Pullen. Were seeing the sparks of that new workflowwhat it means to be a software engineer going into the future.
    0 Comments ·0 Shares ·64 Views
  • Deciding the fate of leftover embryos
    www.technologyreview.com
    This article first appeared in The Checkup,MIT Technology Reviewsweekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first,sign up here. Over the past few months, Ive been working on a piece about IVF embryos. The goal of in vitro fertilization is to create babies via a bit of lab work: Trigger the release of lots of eggs, introduce them to sperm in a lab, transfer one of the resulting embryos into a persons uterus, and cross your fingers for a healthy pregnancy. Sometimes it doesnt work. But often it does. For the article, I explored what happens to the healthy embryos that are left over. I spoke to Lisa Holligan, who had IVF in the UK around five years ago. Holligan donated her genetically abnormal embryos for scientific research. But she still has one healthy embryo frozen in storage. And she doesnt know what to do with it. Shes not the only one struggling with the decision. Leftover embryos are kept frozen in storage tanks, where they sit in little straws, invisible to the naked eye, their growth paused in a state of suspended animation. What happens next is down to personal choicebut that choice can be limited by a complex web of laws and ethical and social factors. These days, responsible IVF clinics will always talk to people about the possibility of having leftover embryos before they begin treatment. Intended parents will sign a form indicating what they would like to happen to those embryos. Typically, that means deciding early on whether they might like any embryos they dont end up using to be destroyed or donated, either to someone else trying to conceive or for research. But it can be really difficult to make these decisions before youve even started treatment. People seeking fertility treatment will usually have spent a long time trying to get pregnant. They are hoping for healthy embryos, and some cant imagine having any left overor how they might feel about them. For a lot of people, embryos are not just balls of cells. They hold the potential for life, after all. Some people see them as children, waiting to be born. Some even name their embryos, or call them their freezer babies. Others see them as the product of a long, exhausting, and expensive IVF journey. Holligan says that she initially considered donating her embryo to another person, but her husband disagreed. He saw the embryo as their child and said he wouldnt feel comfortable with giving it up to another family. I started having these thoughts about a child coming to me when theyre older, saying theyve had a terrible life, and [asking] Why didnt you have me? she told me. Holligan lives in the UK, where you can store your embryos for up to 55 years. Destroying or donating them are also options. Thats not the case in other countries. In Italy, for example, embryos cannot be destroyed or donated. Any that are frozen will remain that way forever, unless the law changes at some point. In the US, regulations vary by state. The patchwork of laws means that one state can bestow a legal status on embryos, giving them the same rights as children, while another might have no legislation in place at all. No one knows for sure how many embryos are frozen in storage tanks, but the figure is thought to be somewhere between 1 million and 10 million in the US alone. Some of these embryos have been in storage for years or decades. In some cases, the intended parents have deliberately chosen this, opting to pay hundreds of dollars per year in fees. But in other cases, clinics have lost touch with their clients. Many of these former clients have stopped paying for the storage of their embryos, but without up-to-date consent forms, clinics can be reluctant to destroy them. What if the person comes back and wants to use those embryos after all? Most clinics, if they have any hesitation or doubt or question, will err on the side of holding on to those embryos and not discarding them, says Sigal Klipstein, a reproductive endocrinologist at InVia Fertility Center in Chicago, who also chairs the ethics committee of the American Society for Reproductive Medicine. Because its kind of like a one-way ticket. Klipstein thinks one of the reasons why some embryos end up abandoned in storage is that the people who created them cant bring themselves to destroy them. Its just very emotionally difficult for someone who has wanted so much to have a family, she tells me. Klipstein says she regularly talks to her patients about what to do with leftover embryos. Even people who make the decision with confidence can change their minds, she says. Weve all had those patients who have discarded embryos and then come back six months or a year later and said: Oh, I wish I had those embryos, she tells me. Those [embryos may have been] their best chance of pregnancy. Those who do want to discard their embryos have options. Often, the embryos will simply be exposed to air and then disposed of. But some clinics will also offer to transfer them at a time or place where a pregnancy is extremely unlikely to result. This compassionate transfer, as it is known, might be viewed as a more natural way to dispose of the embryo. But its not for everyone. Holligan has experienced multiple miscarriages and wonders if a compassionate transfer might feel similar. She wonders if it might just end up putting [her] body and mind through unnecessary stress. Ultimately, for Holligan and many others in a similar position, the choice remains a difficult one. These are very desired embryos, says Klipstein. The purpose of going through IVF was to create embryos to make babies. And [when people] have these embryos, and theyve completed their family plan, theyre in a place they couldnt have imagined. Now read the rest of The Checkup Read more from MIT Technology Review's archive Our relationship with embryos is unique, and a bit all over the place. Thats partly because we cant agree on their moral status. Are they more akin to people or property, or something in between? Who should get to decide their fate? While we get to the bottom of these sticky questions, millions of embryos are stuck in suspended animationsome of them indefinitely. It is estimated that over 12 million babies have been born through IVF. The development of the Nobel Prizewinning technology behind the procedure relied on embryo research. Some worry that donating embryos for research can be onerousand that valuable embryos are being wasted as a result. Fertility rates around the world are dropping below the levels needed to maintain stable populations. But IVF cant save us from a looming fertility crisis. Gender equality and family-friendly policies are much more likely to prove helpful. Two years ago, the US Supreme Court overturned Roe v. Wade, a legal decision that protected the right to abortion. Since then, abortion bans have been enacted in multiple states. But in November of last year, some states voted to extend and protect access to abortion, and voters in Missouri supported overturning the state's ban. Last year, a ruling by the Alabama Supreme Court that embryos count as children ignited fears over access to fertility treatments in a state that had already banned abortion. The move could also have implications for the development of technologies like artificial uteruses and synthetic embryos, my colleague Antonio Regalado wrote at the time. From around the web Its not just embryos that are frozen as part of fertility treatments. Eggs, sperm, and even ovarian and testicular tissue can be stored too. A man who had immature testicular tissue removed and frozen before undergoing chemotherapy as a child 16 years ago had the tissue reimplanted in a world first, according to the team at University Hospital Brussels that performed the procedure around a month ago. The tissue was placed into the mans testicle and scrotum, and scientists will wait a year before testing to see if he is successfully producing sperm. (UZ Brussel) The Danish pharmaceutical company Novo Nordisk makes half the worlds insulin. Now it is better known as the manufacturer of the semaglutide drug Ozempic. How will the sudden shift affect the production and distribution of these medicines around the world? (Wired) The US has not done enough to prevent the spread of the H5N1 virus in dairy cattle. The response to bird flu is a national embarrassment, argues Katherine J. Wu. (The Atlantic) Elon Musk has said that if all goes well, millions of people will have brain-computer devices created by his company Neuralink implanted within 10 years. In reality, progress is slowerso far, Musk has said that three people have received the devices. My colleague Antonio Regalado predicts what we can expect from Neuralink in 2025. (MIT Technology Review)
    0 Comments ·0 Shares ·56 Views
  • OpenAI has created an AI model for longevity science
    www.technologyreview.com
    When you think of AIs contributions to science, you probably think of AlphaFold, the Google DeepMind protein-folding program that earned its creator a Nobel Prize last year. Now OpenAI says its getting into the science game toowith a model for engineering proteins. The company says it has developed a language model that dreams up proteins capable of turning regular cells into stem cellsand that it has handily beat humans at the task. The work represents OpenAIs first model focused on biological data and its first public claim that its models can deliver unexpected scientific results. As such, it is a step toward determining whether or not AI can make true discoveries, which some argue is a major test on the pathway to artificial general intelligence. Last week, OpenAI CEO Sam Altman said he was confident his company knows how to build an AGI, adding that superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own. The protein engineering project started a year ago when Retro Biosciences, a longevity research company based in San Francisco, approached OpenAI about working together. That link-up did not happen by chance. Sam Altman, the CEO of OpenAI, personally funded Retro with $180 million, as MIT Technology Review first reported in 2023. Retro has the goal of extending the normal human lifespan by 10 years. For that, it studies what are called Yamanaka factors. Those are a set of proteins that, when added to a human skin cell, will cause it to morph into a young-seeming stem cell, a type that can produce any other tissue in the body. Its a phenomenon that researchers at Retro, and at richly funded companies like Altos Labs, see as the possible starting point for rejuvenating animals, building human organs, or providing supplies of replacement cells. But such cell reprogramming is not very efficient. It takes several weeks, and less than 1% of cells treated in a lab dish will complete the rejuvenation journey. OpenAIs new model, called GPT-4b micro, was trained to suggest ways to re-engineer the protein factors to increase their function. According to OpenAI, researchers used the models suggestions to change two of the Yamanaka factors to to be more than 50 times as effectiveat least according to some preliminary measures. Just across the board, the proteins seem better than what the scientists were able to produce by themselves, says John Hallman, an OpenAI researcher. Hallman and OpenAIs Aaron Jaech, as well as Rico Meinl from Retro, were the models lead developers. Outside scientists wont be able to tell if the results are real until theyre published, something the companies say they are planning. Nor is the model available for wider useits still a bespoke demonstration, not an official product launch. This project is meant to show that were serious about contributing to science, says Jaech. But whether those capabilities will come out to the world as a separate model or whether theyll be rolled into our mainline reasoning modelsthats still to be determined. The model does not work the same way as Googles AlphaFold, which predicts what shape proteins will take. Since the Yamanaka factors are unusually floppy and unstructured proteins, OpenAI said, they called for a different approach, which its large language models were suited to. The model was trained on examples of protein sequences from many species, as well as information on which proteins tend to interact with one another. While thats a lot of data, its just a fraction of what OpenAIs flagship chatbots were trained on, making GPT-4b an example of a small language model that works with a focused data set. Once Retro scientists were given the model, they tried to steer it to suggest possible redesigns of the Yamanaka proteins. The prompting tactic used is similar to the few-shot method, in which a user queries a chatbot by providing a series of examples with answers, followed by an example for the bot to respond to. Although genetic engineers have ways to direct evolution of molecules in the lab, they can usually test only so many possibilities. And even a protein of typical length can be changed in nearly infinite ways (since theyre built from hundreds of amino acids, and each acid comes in 20 possible varieties). OpenAIs model, however, often spits out suggestions in which a third of the amino acids in the proteins were changed. OPENAI We threw this model into the lab immediately and we got real-world results, says Retros CEO, Joe Betts-Lacroix. He says the models ideas were unusually good, leading to improvements over the original Yamanaka factors in a substantial fraction of cases. Vadim Gladyshev, a Harvard University aging researcher who consults with Retro, says better ways of making stem cells are needed. For us, it would be extremely useful. [Skin cells] are easy to reprogram, but other cells are not, he says. And to do it in a new speciesits often extremely different, and you dont get anything. How exactly the GPT-4b arrives at its guesses is still not clearas is often the case with AI models. Its like when AlphaGo crushed the best human at Go, but it took a long time to find out why, says Betts-Lacroix. We are still figuring out what it does, and we think the way we apply this is only scratching the surface. OpenAI says no money changed hands in the collaboration. But because the work could benefit Retrowhose biggest investor is Altmanthe announcement may add to questions swirling around the OpenAI CEOs side projects. Last year, the Wall Street Journal said Altmans wide-ranging investments in private tech startups amount to an opaque investment empire that is creating a mounting list of potential conflicts, since some of these companies also do business with OpenAI. In Retros case, simply being associated with Altman, OpenAI, and the race toward AGI could boost its profile and increase its ability to hire staff and raise funds. Betts-Lacroix did not answer questions about whether the early-stage company is currently in fundraising mode. OpenAI says Altman was not directly involved in the work and that it never makes decisions based on Altmans other investments.
    0 Comments ·0 Shares ·27 Views
  • The Download: how to save social media, and leftover embryos
    www.technologyreview.com
    This is today's edition ofThe Download,our weekday newsletter that provides a daily dose of what's going on in the world of technology. We need to protect the protocol that runs Bluesky Eli Pariser & Deepti Doshi Last week, when Mark Zuckerberg announced Meta would be ending third-party fact-checking, it was a shocking pivot, but not exactly surprising. Its just the latest example of a billionaire flip-flop affecting our social lives on the internet. Zuckerberg isnt the only social media CEO careening all over the road: Elon Musk, since buying Twitter in 2022 and touting free speech as the bedrock of a functioning democracy, has suspended journalists, restored tens of thousands of banned users, brought back political advertising, and weakened verification and harassment policies. Unfortunately, these capricious billionaires can do whatever they want because of an ownership model that privileges singular, centralized control in exchange for shareholder returns. The internet doesnt need to be like this. But as luck would have it, a new way is emerging just in time. Read the full story. Deciding the fate of leftover embryos Over the past few months, Ive been working on a piece about IVF embryos. The goal of in vitro fertilization is to create babies via a bit of lab work: Trigger the release of lots of eggs, introduce them to sperm in a lab, transfer one of the resulting embryos into a persons uterus, and cross your fingers for a healthy pregnancy. Sometimes it doesnt work. But often it does. For the article, I explored what happens to the healthy embryos that are left over. These days, responsible IVF clinics will always talk to people about the possibility of having leftover embryos before they begin treatment. But it can be really difficult to make these decisions before youve even started treatment, and some people cant imagine having any left overor how they might feel about them. Read the full story.Jessica Hamzelou This article first appeared in The Checkup, MIT Technology Reviews weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. MIT Technology Review Narrated: Palmer Luckey on the Pentagons future of mixed reality Palmer Luckey, the founder of Oculus VR, has set his sights on a new mixed-reality headset customer: the Pentagon. If designed well, his company Andurils headset will automatically sort through countless pieces of information and flag the most important ones to soldiers in real time. But thats a big if. This is our latest story to be turned into a MIT Technology Review Narrated podcast, which were publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as its released.The must-reads Ive combed the internet to find you todays most fun/important/scary/fascinating stories about technology. 1 The Biden administration wont force through a TikTok ban But TikTok could choose to shut itself down on Sunday to prove a point. (ABC News)+ A Supreme Court decision is expected later today. (NYT $)+ Every platform has a touch of TikTok about it these days. (The Atlantic $) 2 Apple is pausing its AI news feature Because it cant be trusted to meld news stories together without hallucinating. (BBC)+ The company is working on a fix to roll out in a future software update. (WP $)3 Meta is preparing for Donald Trumps mass deportations By relaxing speech policies around immigration, Meta is poised to shape public opinion towards accepting Trumps plans to tear families apart. (404 Media)4 An uncrewed SpaceX rocket exploded during a test flight Elon Musk says it was probably caused by a leak. (WSJ $)5 The FBI believes that hackers accessed its agents call logs The data could link investigators to their secret sources. (Bloomberg $)6 What its like fighting fire with waterDumping water on LAs wildfires may be inelegant, but it is effective. (NY Mag $) + How investigators are attempting to trace the fires origins. (BBC)7 The road to adapting Teslas charges for other EVs is far from smooth But it is happening, slowly but surely. (IEEE Spectrum)+ Donald Trump isnt a fan of EVs, but the market is undoubtedly growing. (Vox)+ Why EV charging needs more than Tesla. (MIT Technology Review)8 Bionic hands are getting far more sensitive FT $) + These prosthetics break the mold with third thumbs, spikes, and superhero skins. (MIT Technology Review)9 Gen Z cant get enough of astrology apps Stargazing is firmly back e\in vogue among the younger generations. (Economist $) 10 Nintendo has finally unveiled its long-awaited Switch 2 console Only for it to look a whole lot like its predecessor. (WSJ $)+ But itll probably sell a shedload of units anyway. (Wired $)Quote of the day Going viral is like winning the lotterynearly impossible to replicate. Sarah Schauer, a former star on defunct video app Vine, offers creators left nervous by TikToks uncertain future in the US some advice, the Washington Post reports. The big story After 25 years of hype, embryonic stem cells are still waiting for their moment August 2023 In 1998, researchers isolated powerful stem cells from human embryos. It was a breakthrough, since these cells are the starting point for human bodies and have the capacity to turn into any other type of cellheart cells, neurons, you name it. National Geographic would later summarize the incredible promise: "the dream is to launch a medical revolution in which ailing organs and tissues might be repaired with living replacements. It was the dawn of a new era. A holy grail. Pick your favorite clichthey all got airtime. Yet today, more than two decades later, there are no treatments on the market based on these cells. Not one. Our biotech editor Antonio Regalado set out to investigate why, and when that might change. Heres what he discovered. We can still have nice things A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.) + If you're planning on catching up with a friend this weekendstop! You should be hanging out instead.+ David Lynch was a true visionary; an innovative artist and master of the truly weird. The world is a duller place without him.+ The very best instant noodles, ranked ($)+ Congratulations to the highly exclusive Cambridge University Tiddlywinks Club, which is celebrating its 70th anniversary.
    0 Comments ·0 Shares ·39 Views
  • We need to protect the protocol that runs Bluesky
    www.technologyreview.com
    Last week, when Mark Zuckerberg announced Meta would be ending third-party fact-checking, it was a shocking pivot, but not exactly surprising. Its just the latest example of a billionaire flip-flop affecting our social lives on the internet. After January 6th, Zuckerberg bragged to Congress about Facebooks industry-leading fact-checking program and banned President Trump from the platform. But just two years later, he welcomed Trump back. And last year Zuckerberg was privately reassuring conservative rep Jim Jordan that Meta will no longer demote questionable content while its being fact-checked. Now, not only is Meta ending fact-checking completely, it is loosening rules around hate speech, allowing horrendous personal attacks on migrants or trans people, for example, on its platforms. And Zuckerberg isnt the only social media CEO careening all over the road: Elon Musk, since buying Twitter in 2022 and touting free speech as the bedrock of a functioning democracy, has suspended journalists, restored tens of thousands of banned users (including White Nationalists), brought back political advertising, and weakened verification and harassment policies. Unfortunately, these capricious billionaires can do whatever they want because of an ownership model that privileges singular, centralized control in exchange for shareholder returns. And this has led to a constantly shifting, opaque digital environment in which people can lose their communication pathways and livelihoods in a second, with no recourse as the rules shift. The internet doesnt need to be like this. But as luck would have it, a new way is emerging just in time. If youve heard of Bluesky, youve probably heard of it as a clone of Twitter where liberals can take refuge. But under the hood its structured fundamentally differently in a way that could point us to a healthier internet for everyone, regardless of politics or identity. Just like email, Bluesky sits on top of an open protocol. In practice, that means that anyone can build on it. Just like you wouldnt need anyones permission to start a newsletter company built on email, people are starting to share remixed versions of their social media feed, built on Bluesky. This sounds like a small thing, but think about all the harms done by social media companies through their algorithms in the last decade: insurrection, radicalization, self-harm, bullying. Similarly, Bluesky enables users to share blocklists and labels, to collaborate on verification and moderation. Letting people shape their own experience of social media is nothing short of revolutionary. And importantly, if you decide that you dont agree with Blueskys design and moderation decisions, you can build something else on the same infrastructure and use that instead. This is fundamentally different from the dominant, centralized social media that has come before. At the core of Blueskys philosophy is the idea that instead of being centralized in the hands of one person or institution, social media governance should obey the principle of subsidiarity. Nobel Prize-winning economist Elinor Ostrom found, through studying grassroots solutions to local environmental problems around the world, that some problems are best solved locally, while others are best solved at a higher level. In terms of content moderation, posts related to CSAM or terrorism are best handled by professionals keeping millions or billions safe. But a lot of decisions about speech can be solved in each community, or even user by user by assembling a Bluesky blocklist. So all the right elements are currently in place at Bluesky to usher in this new architecture for social media: independent ownership, newfound popularity, a stark contrast with other dominant platforms, and right-minded leadership. But challenges remain, and we cant count on Bluesky doing this right without support. Critics have pointed out that Bluesky has yet to turn a profit and is currently running on venture capital, the same corporate structure that brought us Facebook, Twitter, and other social media companies. As of now, theres no option to exit Bluesky and take your data and network with you, because there are no other servers that run the AT Protocol. Bluesky CEO Jay Graber deserves credit for her stewardship so far, and for attempting to avoid the dangers of advertising incentives. But the process of capitalism degrading tech products is so predictable that Cory Doctorow coined a now-popular term for it: enshittification. Thats why we need to act now to secure the foundation of this digital future and make it enshittification-proof.Free Our Feeds. There are three parts: First, Free Our Feeds wants to create a nonprofit foundation to govern and protect the AT Protocol, outside of Bluesky the company. We also need to build redundant servers so anyone can leave with their data or build anything they wantregardless of policies set by Bluesky. Finally, we need to spur the development of a whole ecosystem built on this tech with seed money and expertise. Its worth noting that this is not a hostile takeover: Bluesky and Graber recognize the importance of this effort and have signaled their approval. But the point is, this effort cant rely on them. To free us from fickle billionaires, some of the power has to reside outside Bluesky Inc. If we get this right, so much is possible. Not too long ago, the internet was full of builders and people working together: the open web. Email. Podcasts. Wikipedia is one of the best examples a collaborative project to create one of the webs best free, public resources. And the reason we still have it today is the infrastructure built up around it: the nonprofit Wikimedia Foundation protects the project and insulates it from the pressures of capitalism. Whens the last time we collectively built anything as good? We can shift the balance of power and reclaim our social lives from these companies and their billionaires. This an opportunity to bring much more independence, innovation, and local control to our online conversations. We can finally build the Wikipedia of social media, or whatever we want. But we need to act, because the future of the internet cant depend on whether one of the richest men on earth wakes up on the wrong side of the bed. Eli Pariser is author of The Filter Bubble and co-director of New_ Public, a nonprofit R&D lab thats working to reimagine social media. Deepti Doshi is a co-director of New_ Public and was a director at Meta.
    0 Comments ·0 Shares ·52 Views
  • What to expect from Neuralink in 2025
    www.technologyreview.com
    MIT Technology Reviews Whats Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of themhere. In November, a young man named Noland Arbaugh announced hed be livestreaming from his home for three days straight. His broadcast was in some ways typical fare: a backyard tour, video games, meet mom. The difference is that Arbaugh, who is paralyzed, has thin electrode-studded wires installed in his brain, which he used to move a computer mouse on a screen, click menus, and play chess. The implant, called N1, was installed last year by neurosurgeons working with Neuralink, Elon Musks brain-interface company. The possibility of listening to neurons and using their signals to move a computer cursor was first demonstrated more than 20 years ago in a lab setting. Now, Arbaughs livestream is an indicator that Neuralink is a whole lot closer to creating a plug-and-play experience that can restore peoples daily ability to roam the web and play games, giving them what the company has called digital freedom. But this is not yet a commercial product. The current studies are small-scalethey are true experiments, explorations of how the device works and how it can be improved. For instance, at some point last year, more than half the electrode-studded threads inserted into Aurbaughs brain retracted, and his control over the device worsened; Neuralink rushed to implement fixes so he could use his remaining electrodes to move the mouse. Neuralink did not reply to emails seeking comment, but here is what our analysis of its public statements leads us to expect from the company in 2025. More patients How many people will get these implants?he posted on X: If all goes well, there will be hundreds of people with Neuralinks within a few years, maybe tens of thousands within five years, millions within 10 years. In reality, the actual pace is slowera lot slower. Thats because in a study of a novel device, its typical for the first patients to be staged months apart, to allow time to monitor for problems. Neuralink has publicly announced that two people have received an implant: Arbaugh and a man referred to only as Alex, who received his in July or August. Then, on January 8, Musk disclosed during an online interview that there was now a third person with an implant. Weve got now three patients, three humans with Neuralinks implanted, and they are all working well, Musk said. During 2025, he added, we expect to hopefully do, I dont know, 20 or 30 patients. Barring major setbacks, expect the pace of implants to increasealthough perhaps not as fast as Musk says. In November, Neuralink updated its US trial listing to include space for five volunteers (up from three), and it also opened a trial in Canada with room for six. Considering these two studies only, Neuralink would carry out at least two more implants by the end of 2025 and eight by the end of 2026. However, by opening further international studies, Neuralink could increase the pace of the experiments. Better control So how good is Arbaughs control over the mouse? You can get an idea by trying a game called Webgrid, where you try to click quickly on a moving target. The program translates your speed into a measure of information transfer: bits per second. Neuralink claims Arbaugh reached a rate of over nine bits per second, doubling the old brain-interface record. The median able-bodied user scores around 10 bits per second, according to Neuralink. And yet during his livestream, Arbaugh complained that his mouse control wasnt very good because his model was out of date. It was a reference to how his imagined physical movements get mapped to mouse movements. That mapping degrades over hours and days, and to recalibrate it, he has said, he spends as long as 45 minutes doing a set of retraining tasks on his monitor, such as imagining moving a dot from a center point to the edge of a circle. Noland Arbaugh stops to calibrate during a livestream on X@MODDEDQUAD VIA X Improving the software that sits between Arbaughs brain and the mouse is a big area of focus for Neuralinkone where the company is still experimenting and making significant changes. Among the goals: cutting the recalibration time to a few minutes. We want them to feel like they are in the F1 [Formula One] car, not the minivan, Bliss Chapman, who leads the BCI software team, told the podcaster Lex Fridman last year. Device changes Before Neuralink ever seeks approval to sell its brain interface, it will have to lock in a final device design that can be tested in a pivotal trial involving perhaps 20 to 40 patients, to show it really works as intended. That type of study could itself take a year or two to carry out and hasnt yet been announced. In fact, Neuralink is still tweaking its implant in significant waysfor instance, by trying to increase the number of electrodes or extend the battery life. This month, Musk said the next human tests would be using an upgraded Neuralink device. The company is also still developing the surgical robot, called R1, thats used to implant the device. It functions like a sewing machine: A surgeon uses R1 to thread the electrode wires into peoples brains. According to Neuralinks job listings, improving the R1 robot and making the implant process entirely automatic is a major goal of the company. Thats partly to meet Musks predictions of a future where millions of people have an implant, since there wouldnt be enough neurosurgeons in the world to put them all in manually. We want to get to the point where its one click, Neuralink president Dongjin Seo told Fridman last year. Robot arm Late last year, Neuralink opened a companion study through which it says some of its existing implant volunteers will get to try using their brain activity to control not only a computer mouse but other types of external devices, including an assistive robotic arm. We havent yet seen what Neuralinks robotic arm looks likewhether its a tabletop research device or something that could be attached to a wheelchair and used at home to complete daily tasks. But its clear such a device could be helpful. During Aurbaughs livestream he frequently asked other people to do simple things for him, like brush his hair or put on his hat. Arbaugh demonstrates the use of Imagined Movement Control.@MODDEDQUAD VIA X And using brains to control robots is definitely possiblealthough so far only in a controlled research setting. In tests using a different brain implant, carried out at the University of Pittsburgh in 2012, a paralyzed woman named Jan Scheuermann was able to use a robot arm to stack blocks and plastic cups about as well as a person whod had a severe strokeimpressive, since she couldnt actually move her own limbs. There are several practical obstacles to using a robot arm at home. One is developing a robot thats safe and useful. Another, as noted by Wired, is that the calibration steps to maintain control over an arm that can make 3D movements and grasp objects could be onerous and time consuming. Vision implant In September, Neuralink said it had received breakthrough device designation from the FDA for a version of its implant that could be used to restore limited vision to blind people. The system, which it calls Blindsight, would work by sending electrical impulses directly into a volunteers visual cortex, producing spots of light called phosphenes. If there are enough spots, they can be organized into a simple, pixelated form of vision, as previously demonstrated by academic researchers. The FDA designation is not the same as permission to start the vision study. Instead, its a promise by the agency to speed up review steps, including agreements around what a trial should look like. Right now, its impossible to guess when a Neuralink vision trial could start, but it wont necessarily be this year. More money Neuralink last raised money in 2003, collecting around $325 million from investors in a funding round that valued the company at over $3 billion, according to Pitchbook. Ryan Tanaka, who publishes a podcast about the company, Neura Pod, says he thinks Neuralink will raise more money this year and that the valuation of the private company could triple. Fighting regulators Neuralink has attracted plenty of scrutiny from news reporters, animal-rights campaigners, and even fraud investigators at the Securities and Exchange Commission. Many of the questions surround its treatment of test animals and whether it rushed to try the implant in people. More recently, Musk has started using his X platform to badger and bully heads of state and was named by Donald Trump to co-lead a so-called Department of Government Efficiency, which Musk says will get rid of nonsensical regulations and potentially gut some DC agencies. During 2025, watch for whether Musk uses his digital bullhorn to give health regulators pointed feedback on how theyre handling Neuralink. Other efforts Dont forget that Neuralink isnt the only company working on brain implants. A company called Synchron has one thats inserted into the brain through a blood vessel, which its also testing in human trials of brain control over computers. Other companies, including Paradromics, Precision Neuroscience, and BlackRock Neurotech, are also developing advanced brain-computer interfaces. Special thanks to Ryan Tanaka of Neura Pod for pointing us to Neuralinks public announcements and projections.
    0 Comments ·0 Shares ·55 Views
  • The Download: whats next for Neuralink, and Metas language translation AI
    www.technologyreview.com
    This is today's edition ofThe Download,our weekday newsletter that provides a daily dose of what's going on in the world of technology. What to expect from Neuralink in 2025 In November, a young man named Noland Arbaugh announced hed be livestreaming from his home for three days straight. His broadcast was in some ways typical fare: a backyard tour, video games, meet mom. The difference is that Arbaugh, who is paralyzed, has thin electrode-studded wires installed in his brain, which he used to move a computer mouse on a screen, click menus, and play chess. The implant, called N1, was installed last year by neurosurgeons working with Neuralink, Elon Musks brain-interface company. Arbaughs livestream is an indicator that Neuralink is a whole lot closer to creating a plug-and-play experience that can restore peoples daily ability to roam the web and play games, giving them what the company has called digital freedom. But this is not yet a commercial product. The current studies are small-scalethey are true experiments, explorations of how the device works and how it can be improved. Read on for our analysis of what to expect from the company in 2025. Antonio Regalado Metas new AI model can translate speech from more than 100 languages Whats new: Meta has released a new AI model that can translate speech from 101 different languages. It represents a step toward real-time, simultaneous interpretation, where words are translated as soon as they come out of someones mouth. Why it matters: Typically, translation models for speech use a multistep approach which can be inefficient, and at each step, errors and mistranslations can creep in. But Metas new model, called SeamlessM4T, enables more direct translation from speech in one language to speech in another. Read the full story. Scott J Mulligan Interest in nuclear power is surging. Is it enough to build new reactors? Lately, the vibes have been good for nuclear power. Public support is building, and public and private funding have made the technology more economical in key markets. Theres also a swell of interest from major companies looking to power their data centers. These shifts have been great for existing nuclear plants. Were seeing efforts to boost their power output, extend the lifetime of old reactors, and even reopen facilities that have shut down. Thats good news for climate action, because nuclear power plants produce consistent electricity with very low greenhouse-gas emissions. I covered all these trends in my latest story, which digs into whats next for nuclear power in 2025 and beyond. But as I spoke with experts, one central question kept coming up for me: Will all of this be enough to actually get new reactors built?Casey Crownhart This article is from The Spark, MIT Technology Reviews weekly climate and energy newsletter. To receive it in your inbox every Wednesday, sign up here. The must-reads Ive combed the internet to find you todays most fun/important/scary/fascinating stories about technology. 1 Donald Trump is exploring how to save TikTok An executive order could suspend its ban or sale by up to 90 days. (WP $)+ But questions remain over the legality of such a move. (Axios)+ YouTuber MrBeast has said hes interested in buying the app. (Insider $)+ The depressing truth about TikToks impending ban. (MIT Technology Review)2 Blue Origins New Glenn rocket has made it into space But it lost a booster along the way. (The Verge)3 Angelenos are naming and shaming landlords for illegal price gouging A grassroots Google Sheet is tracking rentals with significant price increases among the wild fires. (Fast Company $)4 How the Trump administration will shake up defense tech Its likely to favor newer players over established firms for lucrative contracts. (FT $)+ Weapons startup Anduril plans to build a $1 billion factory in Ohio. (Axios)+ Palmer Luckey on the Pentagons future of mixed reality. (MIT Technology Review)5 The difference between mistakes made by humans and AIMachines errors are a whole lot weirder, for a start. (IEEE Spectrum) + A new public database lists all the ways AI could go wrong. (MIT Technology Review) 6 The creator economy is bouncing backFunding for creator startups is rising, after two years in the doldrums. (The Information $) 7 Predicting the future of tech is notoriously tough But asking better initial questions is a good place to start. (WSJ $)8 IVF isnt just for combating fertility problems any moreIts becoming a tool for genetic screening before a baby is even born. (The Atlantic $) + Three-parent baby technique could create babies at risk of severe disease. (MIT Technology Review)9 The killer caterpillars could pave the way to better medicine Studying their toxic secretions could help create new drugs more quickly. (Knowable Magazine)10 How to document your life digitally If physical diaries arent for you, there are plenty of smartphone-based options. (NYT $)Quote of the day Americans may only be able to watch as their app rots." Joseph Lorenzo Hall, a technologist at the nonprofit Internet Society, tells Reuters how TikToks complicated network of service providers means that the app could fall apart gradually, rather than all at once, if the proposed US ban goes ahead. The big story How refrigeration ruined fresh food October 2024 Three-quarters of everything in the average American diet passes through the cold chainthe network of warehouses, shipping containers, trucks, display cases, and domestic fridges that keep meat, milk, and more chilled on the journey from farm to fork. As consumers, we put a lot of faith in terms like fresh and natural, but artificial refrigeration has created a blind spot. Weve gotten so good at preserving (and storing) food, that we know more about how to lengthen an apples life span than a humans, and most of us dont give that extraordinary process much thought at all. But all that convenience has come at the expense of diversity and deliciousness. Read the full story. Allison Arieff We can still have nice things A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.) + The biggest and best tours of 2025 look really exciting (especially Oasis!)+ If you love classic mobile phones, you need to check out Aalto Universitys newly launched Nokia Design Archive immediately.+ The one and only Ridley Scott explains how a cigarette inspired that iconic hand-in-wheat shot in Gladiator.+ Set aside your reading goals for the yearyour only aim should be to read the books you really want to.
    0 Comments ·0 Shares ·60 Views
  • Interest in nuclear power is surging. Is it enough to build new reactors?
    www.technologyreview.com
    This article is from The Spark, MIT Technology Reviews weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here. Lately, the vibes have been good for nuclear power. Public support is building, and public and private funding have made the technology more economical in key markets. Theres also a swell of interest from major companies looking to power their data centers. These shifts have been great for existing nuclear plants. Were seeing efforts to boost their power output, extend the lifetime of old reactors, and even reopen facilities that have shut down. Thats good news for climate action, because nuclear power plants produce consistent electricity with very low greenhouse-gas emissions. I covered all these trends in my latest story, which digs into whats next for nuclear power in 2025 and beyond. But as I spoke with experts, one central question kept coming up for me: Will all of this be enough to actually get new reactors built? To zoom in on some of these trends, lets take a look at the US, which has the largest fleet of nuclear reactors in the world (and the oldest, with an average age of over 42 years). In recent years weve seen a steady improvement in public support for nuclear power in the US. Today, around 56% of Americans support more nuclear power, up from 43% in 2020, according to a Pew Research poll. The economic landscape has also shifted in favor of the technology. The Inflation Reduction Act of 2022 includes tax credits specifically for operating nuclear plants, aimed at keeping them online. Qualifying plants can receive up to $15 per megawatt-hour, provided they meet certain labor requirements. (For context, in 2021, its last full year of operation, Palisades in Michigan generated over 7 million megawatt-hours.) Big Tech has also provided an economic boost for the industrytech giants like Microsoft, Meta, Google, and Amazon are all making deals to get in on nuclear. These developments have made existing (or recently closed) nuclear power plants a hot commodity. Plants that might have been candidates for decommissioning just a few years ago are now candidates for license extension. Plants that have already shut down are seeing a potential second chance at life. Theres also the potential to milk more power out of existing facilities through changes called uprates, which basically allow existing facilities to produce more energy by tweaking existing instruments and power generation systems. The US Nuclear Regulatory Commission has approved uprates totaling six gigawatts over the past two decades. Thats a small but certainly significant fraction of the roughly 97 gigawatts of nuclear on the grid today. Any reactors kept online, reopened, or ramped up spell good news for emissions. Well probably also need new reactors just to maintain the current fleet, since so many reactors are scheduled to be retired in the next couple of decades. Will the enthusiasm for keeping old plants running also translate into building new ones? In much of the world (China being a notable exception), building new nuclear capacity has historically been expensive and slow. Its easy to point at Plant Vogtle in the US: The third and fourth reactors at that facility began construction in 2009. They were originally scheduled to start up in 2016 and 2017, at a cost of around $14 billion. They actually came online in 2023 and 2024, and the total cost of the project was north of $30 billion. Some advanced technology has promised to fix the problems in nuclear power. Small modular reactors could help cut cost and construction times, and next-generation reactors promise safety and efficiency improvements that could translate to cheaper, quicker construction. Realistically, though, getting these first-of-their-kind projects off the ground will still require a lot of money and a sustained commitment to making them happen. The next four years are make or break for advanced nuclear, says Jessica Lovering, cofounder at the Good Energy Collective, a policy research organization that advocates for the use of nuclear energy. There are a few factors that could help the progress weve seen recently in nuclear extend to new builds. For one, public support from the US Department of Energy includes not only tax credits but public loans and grants for demonstration projects, which can be a key stepping stone to commercial plants that generate electricity for the grid. Changes to the regulatory process could also help. The Advance Act, passed in 2024, aims at sprucing up the Nuclear Regulatory Commission (NRC) in the hopes of making the approval process more efficient (currently, it can take up to five years to complete). If you can see the NRC really start to modernize toward a more efficient, effective, and predictable regulator, it really helps the case for a lot of these commercial projects, because the NRC will no longer be seen as this barrier to innovation, says Patrick White, research director at the Nuclear Innovation Alliance, a nonprofit think tank. We should start to see changes from that legislation this year, though what happens could depend on the Trump administration. The next few years are crucial for next-generation nuclear technology, and how the industry fares between now and the end of the decade could be very telling when it comes to how big a role this technology plays in our longer-term efforts to decarbonize energy. Now read the rest of The Spark Related reading For more on whats next for nuclear power, check out my latest story. One key trend Im following is efforts to reopen shuttered nuclear plants. Heres how to do it. Kairos Power is working to build molten-salt-cooled reactors, and we named the company to our list of 10 Climate Tech Companies to watch in 2024. Another thing Devastating wildfires have been ravaging Southern California. Heres a roundup of some key stories about the blazes. Strong winds have continued this week, bringing with them the threat of new fires. Heres a page with live updates on the latest. (Washington Post) Officials are scouring the spot where the deadly Palisades fire started to better understand how it was sparked. (New York Times) Climate change didnt directly start the fires, but global warming did contribute to how intensely they burned and how quickly they spread. (Axios) The LA fires show that controlled burns arent a cure-all when it comes to preventing wildfires. (Heatmap News) Seawater is a last resort when it comes to fighting fires, since its corrosive and can harm the environment when dumped on a blaze. (Wall Street Journal) Keeping up with climate US emissions cuts stalled last year, despite strong growth in renewables. The cause: After staying flat or falling for two decades, electricity demand is rising. (New York Times) With Donald Trump set to take office in the US next week, many are looking to state governments as a potential seat of climate action. Heres what to look for in states including Texas, California, and Massachusetts. (Inside Climate News) The US could see as many as 80 new gas-fired power plants built by 2030. The surge comes as demand for power from data centers, including those powering AI, is ballooning. (Financial Times) Global sales of EVs and plug-in hybrids were up 25% in 2024 from the year before. China, the worlds largest EV market, is a major engine behind the growth. (Reuters) A massive plant to produce low-emissions steel could be in trouble. Steelmaker SSAB has pulled out of talks on federal funding for a plant in Mississippi. (Canary Media) Some solar panel companies have turned to door-to-door sales. Things arent always so sunny for those involved. (Wired)
    0 Comments ·0 Shares ·57 Views
  • Metas new AI model can translate speech from more than 100 languages
    www.technologyreview.com
    Meta has released a new AI model that can translate speech from 101 different languages. It represents a step toward real-time, simultaneous interpretation, where words are translated as soon as they come out of someones mouth. Typically, translation models for speech use a multistep approach. First they translate speech into text. Then they translate that text into text in another language. Finally, that translated text is turned into speech in the new language. This method can be inefficient, and at each step, errors and mistranslations can creep in. But Metas new model, called SeamlessM4T, enables more direct translation from speech in one language to speech in another. The model is described in a paper published today in Nature. Seamless can translate text with 23% more accuracy than the top existing models. And although another model, Googles AudioPaLM, can technically translate more languages113 of them, versus 101 for Seamlessit can translate them only into English. SeamlessM4T can translate into 36 other languages. The key is a process called parallel data mining, which finds instances when the sound in a video or audio matches a subtitle in another language from crawled web data. The model learned to associate those sounds in one language with the matching pieces of text in another. This opened up a whole new trove of examples of translations for their model. Meta has done a great job having a breadth of different things they support, like text-to-speech, speech-to-text, even automatic speech recognition, says Chetan Jaiswal, a professor of computer science at Quinnipiac University, who was not involved in the research. The mere number of languages they are supporting is a tremendous achievement. Human translators are still a vital part of the translation process, the researchers say in the paper, because they can grapple with diverse cultural contexts and make sure the same meaning is conveyed from one language into another. This step is important, says Lynne Bowker of the University of Ottawas School of Translation & Interpretation, who didnt work on Seamless. Languages are a reflection of cultures, and cultures have their own ways of knowing things, she says. When it comes to applications like medicine or law, machine translations need to be thoroughly checked by a human, she says. If not, misunderstandings can result. For example, when Google Translate was used to translate public health information about the covid-19 vaccine from the Virginia Department of Health in January 2021, it translated not mandatory in English into not necessary in Spanish, changing the whole meaning of the message. AI models have much more examples to train on in some languages than others. This means current speech-to-speech models may be able to translate a language like Greek into English, where there may be many examples, but cannot translate from Swahili to Greek. The team behind Seamless aimed to solve this problem by pre-training the model on millions of hours of spoken audio in different languages. This pre-training allowed it to recognize general patterns in language, making it easier to process less widely spoken languages because it already had some baseline for what spoken language is supposed to sound like. The system is open-source, which the researchers hope will encourage others to build upon its current capabilities. But some are skeptical of how useful it may be compared with available alternatives. Googles translation model is not as open-source as Seamless, but its way more responsive and fast, and it doesnt cost anything as an academic, says Jaiswal. The most exciting thing about Metas system is that it points to the possibility of instant interpretation across languages in the not-too-distant futurelike the Babel fish in Douglas Adams cult novel The Hitchhiker's Guide to the Galaxy. SeamlessM4T is faster than existing models but still not instant. That said, Meta claims to have a newer version of Seamless thats as fast as human interpreters. While having this kind of delayed translation is okay and useful, I think simultaneous translation will be even more useful, says Kenny Zhu, director of the Arlington Computational Linguistics Lab at the University of Texas at Arlington, who is not affiliated with the new research.
    0 Comments ·0 Shares ·43 Views
  • Fueling the future of digital transformation
    www.technologyreview.com
    In the rapidly evolving landscape of digital innovation, staying adaptable isnt just a strategyits a survival skill. Everybody has a plan until they get punched in the face, says Luis Nio, digital manager for technology ventures and innovation at Chevron, quoting Mike Tyson. Drawing from a career that spans IT, HR, and infrastructure operations across the globe, Nio offers a unique perspective on innovation and how organizational microcultures within Chevron shape how digital transformation evolves. Centralized functions prioritize efficiency, relying on tools like AI, data analytics, and scalable system architectures. Meanwhile, business units focus on simplicity and effectiveness, deploying robotics and edge computing to meet site-specific needs and ensure safety. "From a digital transformation standpoint, what I have learned is that you have to tie your technology to what outcomes drive results for both areas, but you have to allow yourself to be flexible, to be nimble, and to understand that change is constant," he says. Central to this transformation is the rise of industrial AI. Unlike consumer applications, industrial AI operates in high-stakes environments where the cost of errors can be severe. "The wealth of potential information needs to be contextualized, modeled, and governed because of the safety of those underlying processes," says Nio. "If a machine reacts in ways you don't expect, people could get hurt, and so there's an extra level of care that needs to happen and that we need to think about as we deploy these technologies." Nio highlights Chevrons efforts to use AI for predictive maintenance, subsurface analytics, and process automation, noting that AI sits on top of that foundation of strong data management and robust telecommunications capabilities. As such, AI is not just a tool but a transformation catalyst redefining how talent is managed, procurement is optimized, and safety is ensured. Looking ahead, Nio emphasizes the importance of adaptability and collaboration: Transformation is as much about technology as it is about people. With initiatives like the Citizen Developer Program and Learn Digital, Chevron is empowering its workforce to bridge the gap between emerging technologies and everyday operations using an iterative mindset. Nio is also keeping watch over the convergence of technologies like AI, quantum computing, Internet of Things, and robotics, which hold the potential to transform how we produce and manage energy. "My job is to keep an eye on those developments," says Nio, "to make sure that we're managing these things responsibly and the things that we test and trial and the things that we deploy, that we maintain a strict sense of responsibility to make sure that we keep everyone safe, our employees, our customers, and also our stakeholders from a broader perspective." This episode of Business Lab is produced in association with Infosys Cobalt. Full Transcript Megan Tatum: From MIT Technology Review, I'm Megan Tatum and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is digital transformation, from back office operations to infrastructure in the field like oil rigs, companies continue to look for ways to increase profit, meet sustainability goals, and invest in the latest and greatest technology. Two words for you: enabling innovation. My guest is Luis Nio, who is the digital manager of technology ventures, and innovation at Chevron. This podcast is produced in association with Infosys Cobalt. Welcome, Luis. Luis Nio: Thank you, Megan. Thank you for having me. Megan: Thank you so much for joining us. Just to set some context, Luis, you've had a really diverse career at Chevron, spanning IT, HR, and infrastructure operations. I wonder, how have those different roles shaped your approach to innovation and digital strategy? Luis: Thank you for the question. And you're right, my career has spanned many different areas and geographies in the company. It really feels like I've worked for different companies every time I change roles. Like I said, different functions, organizations, locations I've had since here in Houston and in Bakersfield, California and in Buenos Aires, Argentina. From an organizational standpoint, I've seen central teams international service centers, as you mentioned, field infrastructure and operation organizations in our business units, and I've also had corporate function roles. And the reason why I mentioned that diversity is that each one of those looks at digital transformation and innovation through its own lens. From the priority to scale and streamline in central organizations to the need to optimize and simplify out in business units and what I like to call the periphery, you really learn about the concept first off of microcultures and how different these organizations can be even within our own walls, but also how those come together in organizations like Chevron. Over time, I would highlight two things. In central organizations, whether that's functions like IT, HR, or our technical center, we have a central technical center, where we continuously look for efficiencies in scaling, for system architectures that allow for economies of scale. As you can imagine, the name of the game is efficiency. We have also looked to improve employee experience. We want to orchestrate ecosystems of large technology vendors that give us an edge and move the massive organization forward. In areas like this, in central areas like this, I would say that it is data analytics, data science, and artificial intelligence that has become the sort of the fundamental tools to achieve those objectives. Now, if you allow that pendulum to swing out to the business units and to the periphery, the name of the game is effectiveness and simplicity. The priority for the business units is to find and execute technologies that help us achieve the local objectives and keep our people safe. Especially when we are talking about our manufacturing environments where there's risk for our folks. In these areas, technologies like robotics, the Internet of Things, and obviously edge computing are currently the enablers of information. I wouldn't want to miss the opportunity to say that both of those, let's call it, areas of the company, rely on the same foundation and that is a foundation of strong data management, of strong network and telecommunications capabilities because those are the veins through which the data flows and everything relies on data. In my experience, this pendulum also drives our technology priorities and our technology strategy. From a digital transformation standpoint, what I have learned is that you have to tie your technology to what outcomes drive results for both areas, but you have to allow yourself to be flexible, to be nimble, and to understand that change is constant. If you are deploying something in the center and you suddenly realize that some business unit already has a solution, you cannot just say, let's shut it down and go with what I said. You have to adapt, you have to understand behavioral change management and you really have to make sure that change and adjustments are your bread and butter. I don't know if you know this, Megan, but there's a popular fight happening this weekend with Mike Tyson and he has a saying, and that is everybody has a plan until they get punched in the face. And what he's trying to say is you have to be adaptable. The plan is good, but you have to make sure that you remain agile. Megan: Yeah, absolutely. Luis: And then I guess the last lesson really quick is about risk management or maybe risk appetite. Each group has its own risk appetite depending on the lens or where they're sitting, and this may create some conflict between organizations that want to move really, really fast and have urgency and others that want to take a step back and make sure that we're doing things right at the balance. I think that at the end, I think that's a question for leadership to make sure that they have a pulse on our ability to change. Megan: Absolutely, and you've mentioned a few different elements and technologies I'd love to dig into a bit more detail on. One of which is artificial intelligence because I know Chevron has been exploring AI for several years now. I wonder if you could tell us about some of the AI use cases it's working on and what frameworks you've developed for effective adoption as well. Luis: Yeah, absolutely. This is the big one, isn't it? Everybody's talking about AI. As you can imagine, the focus in our company is what is now being branded as industrial AI. That's really a simple term to explain that AI is being applied to industrial and manufacturing settings. And like other AI, and as I mentioned before, the foundation remains data. I want to stress the importance of data here. One of the differences however is that in the case of industrial AI, data comes from a variety of sources. Some of them are very critical. Some of them are non-critical. Sources like operating technologies, process control networks, and SCADA, all the way to Internet of Things sensors or industrial Internet of Things sensors, and unstructured data like engineering documentation and IT data. These are massive amounts of information coming from different places and also from different security structures. The complexity of industrial AI is considerably higher than what I would call consumer or productivity AI. Megan: Right. Luis: The wealth of potential information needs to be contextualized, modeled, and governed because of the safety of those underlying processes. When you're in an industrial setting, if a machine reacts in ways you don't expect, people could get hurt, and so there's an extra level of care that needs to happen and that we need to think about as we deploy these technologies. AI sits on top of that foundation and it takes different shapes. It can show up as a copilot like the ones that have been popularized recently, or it can show up as agentic AI, which is something that we're looking at closely now. And agentic AI is just a term to mean that AI can operate autonomously and can use complex reasoning to solve multistep problems in an industrial setting. So with that in mind, going back to your question, we use both kinds of AI for multiple use cases, including predictive maintenance, subsurface analytics, process automation, and workflow optimization, and also end-user productivity. Each one of those use cases obviously needs specific objectives that the business is looking at in each area of the value chain. In predictive maintenance, for example, we monitor and we analyze equipment health, we prevent failures, and we allow for preventive maintenance and reduced downtime. The AI helps us understand when machinery needs to be maintained in order to prevent failure instead of just waiting for it to happen. In subsurface analysis, we're exploring AI to develop better models of hydrocarbon reservoirs. We are exploring AI to forecast geomechanical models and to capture and understand data from fiber optic sensing. Fiber optic sensing is a capability that has proven very valuable to us, and AI is helping us make sense of the wealth of information that comes out of the whole, as we like to say. Of course, we don't do this alone. We partner with many third-party organizations, with vendors, and with people inside subject matter experts inside of Chevron to move the projects forward. There are several other areas beyond industrial AI that we are looking at. AI really is a transformation catalyst, and so areas like finance and law and procurement and HR, we're also doing testing in those corporate areas. I can tell you that I've been part of projects in procurement, in HR. When I was in HR we ran a pretty amazing effort in partnership with a third-party company, and what they do is they seek to transform the way we understand talent, and the way they do that is they are trying to provide data-driven frameworks to make talent decisions. And so they redefine talent by framing data in the form of skills, and as they do this, they help de-bias processes that are usually or can be usually prone to unconscious biases and perspectives. It really is fascinating to think of your talent-based skills and to start decoupling them from what we know since the industrial era began, which is people fit in jobs. Now the question is more the other way around. How can jobs adapt to people's skills? And then in procurement, AI is basically helping us open the aperture to a wider array of vendors in an automated fashion that makes us better partners. It's more cost-effective. It's really helpful. Before I close here, you did reference frameworks, so the framework of industrial AI versus what I call productivity AI, the understanding of the use cases. All of this sits on top of our responsible AI frameworks. We have set up a central enterprise AI organization and they have really done a great job in developing key areas of responsible AI as well as training and adoption frameworks. This includes how to use AI, how not to use AI, what data we can share with the different GPTs that are available to us. We are now members of organizations like the Responsible AI Institute. This is an organization that fosters the safe use of AI and trustworthy AI. But our own responsible AI framework, it involves four pillars. The first one is the principles, and this is how we make sure we continue to stay aligned with the values that drive this company, which we call The Chevron Way. It includes assessment, making sure that we evaluate these solutions in proportion to impact and risk. As I mentioned, when you're talking about industrial processes, people's lives are at stake. And so we take a very close look at what we are putting out there and how we ensure that it keeps our people safe. It includes education, I mentioned training our people to augment their capabilities and reinforcing responsible principles, and the last of the four is governance oversight and accountability through control structures that we are putting in place. Megan: Fantastic. Thank you so much for those really fascinating specific examples as well. It's great to hear about. And digital transformation, which you did touch on briefly, has become critical of course to enable business growth and innovation. I wonder what has Chevron's digital transformation looked like and how has the shift affected overall operations and the way employees engage with technology as well? Luis: Yeah, yeah. That's a really good question. The term digital transformation is interpreted in many different ways. For me, it really is about leveraging technology to drive business results and to drive business transformation. We usually tend to specify emerging technology as the catalyst for transformation. I think that is okay, but I also think that there are ways that you can drive digital transformation with technology that's not necessarily emerging but is being optimized, and so under this umbrella, we include everything from our Citizen Developer Program to complex industry partnerships that help us maximize the value of data. The Citizen Developer Program has been very successful in helping bridge the gap between our technical software engineer and software development practices and people who are out there doing the work, getting familiar, and demystifying the way to build solutions. I do believe that transformation is as much about technology as it is about people. And so to go back to the responsible AI framework, we are actively training and upskilling the workforce. We created a program called Learn Digital that helps employees embrace the technologies. I mentioned the concept of demystifying. It's really important that people don't fall into the trap of getting scared by the potential of the technology or the fact that it is new and we help them and we give them the tools to bridge the change management gap so they can get to use them and get the most out of them. At a high level, our transformation has followed the cyclical nature that pretty much any transformation does. We have identified the data foundations that we need to have. We have understood the impact of the processes that we are trying to digitize. We organize that information, then we streamline and automate processes, we learn, and now machines learn and then we do it all over again. And so this cyclical mindset, this iterative mindset has really taken hold in our culture and it has made us a little bit better at accepting the technologies that are driving the change. Megan: And to look at one of those technologies in a bit more detail, cloud computing has revolutionized infrastructure across industries. But there's also a pendulum ship now toward hybrid and edge computing models. How is Chevron balancing cloud, hybrid, and edge strategies for optimal performance as well? Luis: Yeah, that's a great question and I think you could argue that was the genesis of the digital transformation effort. It's been a journey for us and it's a journey that I think we're not the only ones that may have started it as a cost savings and storage play, but then we got to this ever-increasing need for multiple things like scaling compute power to support large language models and maximize how we run complex models. There's an increasing need to store vast amounts of data for training and inference models while we improve data management and, while we predict future needs. There's a need for the opportunity to eliminate hardware constraints. One of the promises of cloud was that you would be able to ramp up and down depending on your compute needs as projects demanded. And that hasn't stopped, that has only increased. And then there's a need to be able to do this at a global level. For a company like ours that is distributed across the globe, we want to do this everywhere while actively managing those resources without the weight of the infrastructure that we used to carry on our books. Cloud has really helped us change the way we think about the digital assets that we have. It's important also that it has created this symbiotic need to grow between AI and the cloud. So you don't have the AI without the cloud, but now you don't have the cloud without AI. In reality, we work on balancing the benefits of cloud and hybrid and edge computing, and we keep operational efficiency as our North Star. We have key partnerships in cloud, that's something that I want to make sure I talk about. Microsoft is probably the most strategic of our partnerships because they've helped us set our foundation for cloud. But we also think of the convenience of hybrid through the lens of leveraging a convenient, scalable public cloud and a very secure private cloud that helps us meet our operational and safety needs. Edge computing fills the gap or the need for low latency and real-time data processing, which are critical constraints for decision-making in most of the locations where we operate. You can think of an offshore rig, a refinery, an oil rig out in the field, and maybe even not-so-remote areas like here in our corporate offices. Putting that compute power close to the data source is critical. So we work and we partner with vendors to enable lighter compute that we can set at the edge and, I mentioned the foundation earlier, faster communication protocols at the edge that also solve the need for speed. But it is important to remember that you don't want to think about edge computing and cloud as separate things. Cloud supports edge by providing centralized management by providing advanced analytics among others. You can train models in the cloud and then deploy them to edge devices, keeping real-time priorities in mind. I would say that edge computing also supports our cybersecurity strategy because it allows us to control and secure sensitive environments and information while we embed machine learning and AI capabilities out there. So I have mentioned use cases like predictive maintenance and safety, those are good examples of areas where we want to make sure our cybersecurity strategy is front and center. When I was talking about my experience I talked about the center and the edge. Our strategy to balance that pendulum relies on flexibility and on effective asset management. And so making sure that our cloud reflects those strategic realities gives us a good footing to achieve our corporate objectives. Megan: As you say, safety is a top priority. How do technologies like the Internet of Things and AI help enhance safety protocols specifically too, especially in the context of emissions tracking and leak detection? Luis: Yeah, thank you for the question. Safety is the most important thing that we think and talk about here at Chevron. There is nothing more important than ensuring that our people are safe and healthy, so I would break safety down into two. Before I jump to emissions tracking and leak detection, I just want to make a quick point on personal safety and how we leverage IoT and AI to that end. We use sensing capabilities that help us keep workers out of harm's way, and so things like computer vision to identify and alert people who are coming into safety areas. We also use computer vision, for example, to identify PPE requirementspersonal protective equipment requirementsand so if there are areas that require a certain type of clothing, a certain type of identification, or a hard hat, we are using technologies that can help us make sure people have that before they go into a particular area. We're also using wearables. Wearables help us in one of the use cases is they help us track exhaustion and dehydration in locations where that creates inherent risk, and so locations that are very hot, whether it's because of the weather or because they are enclosed, we can use wearables that tell us how fast the person's getting dehydrated, what are the levels of liquid or sodium that they need to make sure that they're safe or if they need to take a break. We have those capabilities now. Going back to emissions tracking and leak detection, I think it's actually the combination of IoT and AI that can transform how we prevent and react to those. In this case, we also deploy sensing capabilities. We use things like computer vision, like infrared capabilities, and we use others that deliver data to the AI models, which then alert and enable rapid response. The way I would explain how we use IoT and AI for safety, whether it's personnel safety or emissions tracking and leak detection, is to think about sensors as the extension of human ability to sense. In some cases, you could argue it's super abilities. And so if you think of sight normally you would've had supervisors or people out there that would be looking at the field and identifying issues. Well, now we can use computer vision with traditional RGB vision, we can use them with infrared, we can use multi-angle to identify patterns, and have AI tell us what's going on. If you keep thinking about the human senses, that's sight, but you can also use sound through ultrasonic sensors or microphone sensors. You can use touch through vibration recognition and heat recognition. And even more recently, this is something that we are testing more recently, you can use smell. There are companies that are starting to digitize smell. Pretty exciting, also a little bit crazy. But it is happening. And so these are all tools that any human would use to identify risk. Well, so now we can do it as an extension of our human abilities to do so. This way we can react much faster and better to the anomalies. A specific example with methane. We have a simple goal with methane, we want to keep methane in the pipe. Once it's out, it's really hard or almost impossible to take it back. Over the last six to seven years, we have reduced our methane intensity by over 60% and we're leveraging technology to achieve that. We have deployed a methane detection program. We have trialed over 10 to 15 advanced methane detection technologies. A technology that I have been looking at recently is called Aquanta Vision. This is a company supported by an incubator program we have called Chevron Studio. We did this in partnership with the National Renewable Energy Laboratory, and what they do is they leverage optical gas imaging to detect methane effectively and to allow us to prevent it from escaping the pipe. So that's just an example of the technologies that we're leveraging in this space. Megan: Wow, that's fascinating stuff. And on emissions as well, Chevron has made significant investments in new energy technologies like hydrogen, carbon capture, and renewables. How do these technologies fit into Chevron's broader goal of reducing its carbon footprint? Luis: This is obviously a fascinating space for us, one that is ever-changing. It is honestly not my area of expertise. But what I can say is we truly believe we can achieve high returns and lower carbon, and that's something that we communicate broadly. A few years ago, I believe it was 2021, we established our Chevron New Energies company and they actively explore lower carbon alternatives including hydrogen, renewables, and carbon capture offsets. My area, the digital area, and the convergence between digital technologies and the technical sciences will enable the techno-commercial viability of those business lines. Thinking about carbon capture, is something that we've done for a long time. We have decades of experience in carbon capture technologies across the world. One of our larger projects, the Gorgon Project in Australia, I think they've captured something between 5 and 10 million tons of CO2 emissions in the past few years, and so we have good expertise in that space. But we also actively partner in carbon capture. We have joined hubs of carbon capture here in Houston, for example, where we investing in companies like there's a company called Carbon Clean, a company called Carbon Engineering, and one called Svante. I'm familiar with these names because the corporate VC team is close to me. These companies provide technologies for direct air capture. They provide solutions for hard-to-abate industries. And so we want to keep an eye on these emerging capabilities and make use of them to continuously lower our carbon footprint. There are two areas here that I would like to talk about. Hydrogen first. This is another area that we're familiar with. Our plan is to build on our existing assets and capabilities to deliver a large-scale hydrogen business. Since 2005, I think we've been doing retail hydrogen, and we also have several partnerships there. In renewables, we are creating a range of fuels for different transportation types. We use diesel, bio-based diesel, we use renewable natural gas, we use sustainable aviation fuel. Yeah, so these are all areas of importance to us. They're emerging business lines that are young in comparison to the rest of our company. We've been a company for 140 years plus, and this started in 2021, so you can imagine how steep that learning curve is. I mentioned how we leverage our corporate venture capital team to learn and to keep an eye out on what are these emerging trends and technologies that we want to learn about. They leverage two things. They leverage a core fund, which is focused on areas that can seek innovation for our core business for the title. And we have a separate future energy fund that explores areas that are emerging. Not only do they invest in places like hydrogen, carbon capture, and renewables, but they also may invest in other areas like wind and geothermal and nuclear capability. So we constantly keep our eyes open for these emerging technologies. Megan: I see. And I wonder if you could share a bit more actually about Chevron's role in driving sustainable business innovation. I'm thinking of initiatives like converting used cooking oil into biodiesel, for example. I wonder how those contribute to that overall goal of creating a circular economy. Luis: Yeah, this is fascinating and I was so happy to learn a little bit more about this year when I had the chance to visit our offices in Iowa. I'll get into that in a second. But happy to talk about this, again with the caveat that it's not my area of expertise. Megan: Of course. Luis: In the case of biodiesel, we acquired a company called REG in 2022. They were one of the founders of the renewable fuels industry, and they honestly do incredible work to create energy through a process, I forget the name of the process to be honest. But at the most basic level what they do is they prepare feedstocks that come from different types of biomass, you mentioned cooking oils, there's also soybeans, there's animal fats. And through various chemical reactions, what they do is convert components of the feedstock into biodiesel and glycerin. After that process, what they do is they separate un-reactive methanol, which is recovered and recycled into the process, and the biodiesel goes through a final processing to make sure that it meets the standards necessary to be commercialized. What REG has done is it has boosted our knowledge as a broader organization on how to do this better. They continuously look for bio-feedstocks that can help us deliver new types of energy. I had mentioned bio-based diesel. One of the areas that we're very focused on right now is sustainable aviation fuel. I find that fascinating. The reason why this is working and the reason why this is exciting is because they brought this great expertise and capability into Chevron. And in turn, as a larger organization, we're able to leverage our manufacturing and distribution capabilities to continue to provide that value to our customers. I mentioned that I learned a little bit more about this this year. I was lucky earlier in the year I was able to visit our REG offices in Ames, Iowa. That's where they're located. And I will tell you that the passion and commitment that those people have for the work that they do was incredibly energizing. These are folks who have helped us believe, really, that our promise of lower carbon is attainable. Megan: Wow. Sounds like there's some fascinating work going on. Which brings me to my final question. Which is sort of looking ahead, what emerging technologies are you most excited about and how do you see them impacting both Chevron's core business and the energy sector as a whole as well? Luis: Yeah, that's a great question. I have no doubt that the energy business is changing and will continue to change only faster, both our core business as well as the future energy, or the way it's going to look in the future. Honestly, in my line of work, I come across exciting technology every day. The obvious answers are AI and industrial AI. These are things that are already changing the way we live without a doubt. You can see it in people's productivity. You can see it in how we optimize and transform workflows. AI is changing everything. I am actually very, very interested in IoT, in the Internet of Things, and robotics, the ability to protect humans in high-risk environments, like I mentioned, is critical to us, the opportunity to prevent high-risk events and predict when they're likely to happen. This is pretty massive, both for our productivity objectives as well as for our lower carbon objectives. If we can predict when we are at risk of particular events, we could avoid them altogether. As I mentioned before, this ubiquitous ability to sense our surroundings is a capability that our industry and I'm going to say humankind, is only beginning to explore. There's another area that I didn't talk too much about, which I think is coming, and that is quantum computing. Quantum computing promises to change the way we think of compute power and it will unlock our ability to simulate chemistry, to simulate molecular dynamics in ways we have not been able to do before. We're working really hard in this space. When I say molecular dynamics, think of the way that we produce energy today. It is all about the molecule and understanding the interactions between hydrocarbon molecules and the environment. The ability to do that in multi-variable systems is something that quantum, we believe, can provide an edge on, and so we're working really hard in this space. Yeah, there are so many, and having talked about all of them, AI, IoT, robotics, quantum, the most interesting thing to me is the convergence of all of them. If you think about the opportunity to leverage robotics, but also do it as the machines continue to control limited processes and understand what it is they need to do in a preventive and predictive way, this is such an incredible potential to transform our lives, to make an impact in the world for the better. We see that potential. My job is to keep an eye on those developments, to make sure that we're managing these things responsibly and the things that we test and trial and the things that we deploy, that we maintain a strict sense of responsibility to make sure that we keep everyone safe, our employees, our customers, and also our stakeholders from a broader perspective. Megan: Absolutely. Such an important point to finish on. And unfortunately, that is all the time we have for today, but what a fascinating conversation. Thank you so much for joining us on the Business Lab, Luis. Luis: Great to talk to you. Megan: Thank you so much. That was Luis Nio, who is the digital manager of technology ventures and innovation at Chevron, who I spoke with today from Brighton, England. That's it for this episode of Business Lab. I'm Megan Tatum, I'm your host and a contributing editor at Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print on the web and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. This show is available wherever you get your podcasts, and if you enjoyed this episode, we really hope you'll take a moment to rate and review us. Business Lab is a production of MIT Technology Review, and this episode was produced by Giro Studios. Thank you so much for listening.
    0 Comments ·0 Shares ·52 Views
  • The Download: Chinas marine ranches, and fast-learning robots
    www.technologyreview.com
    This is today's edition ofThe Download,our weekday newsletter that provides a daily dose of what's going on in the world of technology. China wants to restore the sea with high-tech marine ranches A short ferry ride from the port city of Yantai, on the northeast coast of China, sits Genghai No. 1, a 12,000-metric-ton ring of oil-rig-style steel platforms, advertised as a hotel and entertainment complex. Genghai is in fact an unusual tourist destination, one that breeds 200,000 high-quality marine fish each year. The vast majority are released into the ocean as part of a process known as marine ranching.The Chinese government sees this work as an urgent and necessary response to the bleak reality that fisheries are collapsing both in China and worldwide. But just how much of a difference can it make? Read the full story.Matthew Ponsford This story is from the latest print edition of MIT Technology Reviewits all about the exciting breakthroughs happening in the world right now. If you dont already, subscribe to receive future copies. Fast-learning robots: 10 Breakthrough Technologies 2025 Generative AI is causing a paradigm shift in how robots are trained. Its now clear how we might finally build the sort of truly capable robots that have for decades remained the stuff of science fiction. A few years ago, roboticists began marveling at the progress being made in large language models. Makers of those models could feed them massive amounts of textbooks, poems, manualsand then fine-tune them to generate text based on prompts. Its one thing to use AI to create sentences on a screen, but another thing entirely to use it to coach a physical robot in how to move about and do useful things. Now, roboticists have made major breakthroughs in that pursuit. Read the full story. James O'Donnell Fast-learning robots is one of our 10 Breakthrough Technologies for 2025, MIT Technology Reviews annual list of tech to watch. Check out the rest of the list, and cast your vote for the honorary 11th breakthrough. The must-reads Ive combed the internet to find you todays most fun/important/scary/fascinating stories about technology. 1 US regulators are suing Elon Musk| For allegedly violating securities law when he bought Twitter in 2022. (NYT $)+ The case claims that Musk continued to buy shares at artificially low prices. (FT $)+ Musk is unlikely to take it lying down. (Politico)2 SpaceX has launched two private missions to the moon Falling debris from the rockets has forced Qantas to delay flights. (The Guardian)+ The airline has asked for more precise warnings around future launches. (Semafor)+ Space startups are on course for a funding windfall. (Reuters)+ Whats next for NASAs giant moon rocket? (MIT Technology Review)3 Home security cameras are capturing homes burning down in LA Residents have remotely tuned into live footage of their own homes burning. (WP $)+ Californias water scarcity is only going to get worse. (Vox)+ How Los Angeles can rebuild in the wake of the devastation. (The Atlantic $) 4 ChatGPT is about to get much more personal Including reminding you about walking the dog. (Bloomberg $)5 Inside the $30 million campaign to liberate social media from billionaires Free Our Feeds wants to restructure platforms around open-source tech. (Insider $)6 How to avoid getting sick right now The Atlantic $) + But coughs and sneezes could be the least of our problems. (The Guardian)7 The US and China are still collaborating on AI researchDespite rising tensions between the countries. (Rest of World) 8 These startups think they have the solution to lonelinessMaking friends isnt always easy, but these companies have some ideas. (NY Mag $) 9 Here are just some of the ways the universe could end Dont say I didnt warn you. (Ars Technica)+ But at least Earth is probably safe from a killer asteroid for 1,000 years. (MIT Technology Review)10 AI is inventing impossible languages They could help us learn more about how humans learn. (Quanta Magazine)+ These impossible instruments could change the future of music. (MIT Technology Review) Quote of the day If you can get away with it when its front-page news, why bother to comply at all? Marc Fagel, a former director of the SECs San Francisco office, suggests the agencys decision to sue Elon Musk is intended as a deterrent to others, the Wall Street Journal reports. The big story I took an international trip with my frozen eggs to learn about the fertility industry September 2022Anna Louie Sussman Like me, my eggs were flying economy class. They were ensconced in a cryogenic storage flask packed into a metal suitcase next to Paolo, the courier overseeing their passage from a fertility clinic in Bologna, Italy, to the clinic in Madrid, Spain, where I would be undergoing in vitro fertilization.The shipping of gametes and embryos around the world is a growing part of a booming global fertility sector. As people have children later in life, the need for fertility treatment increases each year.After paying for storage costs for six and four years, respectively, at 40 I was ready to try to get pregnant. Transporting the Bolognese batch served to literally put all my eggs in one basket. Read the full story.We can still have nice things A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.) + We need to save the worlds largest sea star!+ Maybe our little corner of the universe is more special than weve been led to believe after all.+ How the worlds leading anti-anxiety coach overcame her own anxiety.+ Heres how to keep your eyes on the prize in 2025and beyond!
    0 Comments ·0 Shares ·30 Views
  • Training robots in the AI-powered industrial metaverse
    www.technologyreview.com
    Imagine the bustling floors of tomorrows manufacturing plant: Robots, well-versed in multiple disciplines through adaptive AI education, work seamlessly and safely alongside human counterparts. These robots can transition effortlessly between tasksfrom assembling intricate electronic components to handling complex machinery assembly. Each robot's unique education enables it to predict maintenance needs, optimize energy consumption, and innovate processes on the fly, dictated by real-time data analyses and learned experiences in their digital worlds. Training for robots like this will happen in a virtual school, a meticulously simulated environment within the industrial metaverse. Here, robots learn complex skills on accelerated timeframes, acquiring in hours what might take humans months or even years. Beyond traditional programming Training for industrial robots was once like a traditional school: rigid, predictable, and limited to practicing the same tasks over and over. But now were at the threshold of the next era. Robots can learn in virtual classroomsimmersive environments in the industrial metaverse that use simulation, digital twins, and AI to mimic real-world conditions in detail. This digital world can provide an almost limitless training ground that mirrors real factories, warehouses, and production lines, allowing robots to practice tasks, encounter challenges, and develop problem-solving skills. What once took days or even weeks of real-world programming, with engineers painstakingly adjusting commands to get the robot to perform one simple task, can now be learned in hours in virtual spaces. This approach, known as simulation to reality (Sim2Real), blends virtual training with real-world application, bridging the gap between simulated learning and actual performance. Although the industrial metaverse is still in its early stages, its potential to reshape robotic training is clear, and these new ways of upskilling robots can enable unprecedented flexibility. Italian automation provider EPF found that AI shifted the companys entire approach to developing robots. We changed our development strategy from designing entire solutions from scratch to developing modular, flexible components that could be combined to create complete solutions, allowing for greater coherence and adaptability across different sectors, says EPFs chairman and CEO Franco Filippi. Learning by doing AI models gain power when trained on vast amounts of data, such as large sets of labeled examples, learning categories, or classes by trial and error. In robotics, however, this approach would require hundreds of hours of robot time and human oversight to train a single task. Even the simplest of instructions, like grab a bottle, for example, could result in many varied outcomes depending on the bottles shape, color, and environment. Training then becomes a monotonous loop that yields little significant progress for the time invested. Building AI models that can generalize and then successfully complete a task regardless of the environment is key for advancing robotics. Researchers from New York University, Meta, and Hello Robot have introduced robot utility models that achieve a 90% success rate in performing basic tasks across unfamiliar environments without additional training. Large language models are used in combination with computer vision to provide continuous feedback to the robot on whether it has successfully completed the task. This feedback loop accelerates the learning process by combining multiple AI techniquesand avoids repetitive training cycles. Robotics companies are now implementing advanced perception systems capable of training and generalizing across tasks and domains. For example, EPF worked with Siemens to integrate visual AI and object recognition into its robotics to create solutions that can adapt to varying product geometries and environmental conditions without mechanical reconfiguration. Learning by imagining Scarcity of training data is a constraint for AI, especially in robotics. However, innovations that use digital twins and synthetic data to train robots have significantly advanced on previously costly approaches. For example, Siemens SIMATIC Robot Pick AI expands on this vision of adaptability, transforming standard industrial robotsonce limited to rigid, repetitive tasksinto complex machines. Trained on synthetic datavirtual simulations of shapes, materials, and environmentsthe AI prepares robots to handle unpredictable tasks, like picking unknown items from chaotic bins, with over 98% accuracy. When mistakes happen, the system learns, improving through real-world feedback. Crucially, this isnt just a one-robot fix. Software updates scale across entire fleets, upgrading robots to work more flexibly and meet the rising demand for adaptive production. Another example is the robotics firm ANYbotics, which generates 3D models of industrial environments that function as digital twins of real environments. Operational data, such as temperature, pressure, and flow rates, are integrated to create virtual replicas of physical facilities where robots can train. An energy plant, for example, can use its site plans to generate simulations of inspection tasks it needs robots to perform in its facilities. This speeds the robots training and deployment, allowing them to perform successfully with minimal on-site setup. Simulation also allows for the near-costless multiplication of robots for training. In simulation, we can create thousands of virtual robots to practice tasks and optimize their behavior. This allows us to accelerate training time and share knowledge between robots, says Pter Fankhauser, CEO and co-founder of ANYbotics. Because robots need to understand their environment regardless of orientation or lighting, ANYbotics and partner Digica created a method of generating thousands of synthetic images for robot training. By removing the painstaking work of collecting huge numbers of real images from the shop floor, the time needed to teach robots what they need to know is drastically reduced. Similarly, Siemens leverages synthetic data to generate simulated environments to train and validate AI models digitally before deployment into physical products. By using synthetic data, we create variations in object orientation, lighting, and other factors to ensure the AI adapts well across different conditions, says Vincenzo De Paola, project lead at Siemens. We simulate everything from how the pieces are oriented to lighting conditions and shadows. This allows the model to train under diverse scenarios, improving its ability to adapt and respond accurately in the real world. Digital twins and synthetic data have proven powerful antidotes to data scarcity and costly robot training. Robots that train in artificial environments can be prepared quickly and inexpensively for wide varieties of visual possibilities and scenarios they may encounter in the real world. We validate our models in this simulated environment before deploying them physically, says De Paola. This approach allows us to identify any potential issues early and refine the model with minimal cost and time. This technologys impact can extend beyond initial robot training. If the robots real-world performance data is used to update its digital twin and analyze potential optimizations, it can create a dynamic cycle of improvement to systematically enhance the robots learning, capabilities, and performance over time. The well-educated robot at work With AI and simulation powering a new era in robot training, organizations will reap the benefits. Digital twins allow companies to deploy advanced robotics with dramatically reduced setup times, and the enhanced adaptability of AI-powered vision systems makes it easier for companies to alter product lines in response to changing market demands. The new ways of schooling robots are transforming investment in the field by also reducing risk. Its a game-changer, says De Paola. Our clients can now offer AI-powered robotics solutions as services, backed by data and validated models. This gives them confidence when presenting their solutions to customers, knowing that the AI has been tested extensively in simulated environments before going live. Filippi envisions this flexibility enabling todays robots to make tomorrows products. The need in one or two years time will be for processing new products that are not known today. With digital twins and this new data environment, it is possible to design today a machine for products that are not known yet, says Filippi. Fankhauser takes this idea a step further. I expect our robots to become so intelligent that they can independently generate their own missions based on the knowledge accumulated from digital twins, he says. Today, a human still guides the robot initially, but in the future, theyll have the autonomy to identify tasks themselves. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Reviews editorial staff.
    0 Comments ·0 Shares ·47 Views
  • The Download: the future of nuclear power, and fact checking Mark Zuckerberg
    www.technologyreview.com
    This is today's edition ofThe Download,our weekday newsletter that provides a daily dose of what's going on in the world of technology. Whats next for nuclear power While nuclear reactors have been generating power around the world for over 70 years, the current moment is one of potentially radical transformation for the technology. As electricity demand rises around the world for everything from electric vehicles to data centers, theres renewed interest in building new nuclear capacity, as well as extending the lifetime of existing plants and even reopening facilities that have been shut down. Efforts are also growing to rethink reactor designs, and 2025 marks a major test for so-called advanced reactors as they begin to move from ideas on paper into the construction phase. Heres what to expect next for the industry.Casey Crownhart This piece is part of MIT Technology Reviews Whats Next series, looking across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here. Mark Zuckerberg and the power of the media On Tuesday last week, Meta CEO Mark Zuckerberg announced that Meta is done with fact checking in the US, that it will roll back restrictions on speech, and is going to start showing people more tailored political content in their feeds. While the end of fact checking has gotten most of the attention, the changes to its hateful speech policy are also notable. Zuckerbergwhose previous self-acknowledged mistakes include the Cambridge Analytica data scandal, and helping to fuel a genocide in Myanmarpresented Facebooks history of fact-checking and content moderation as something he was pressured into doing by the government and media. The reality, of course, is that these were his decisions. He famously calls the shots, and always has. Read the full story. Mat Honan This story first appeared in The Debrief, providing a weekly take on the tech news that really matters and links to stories we loveas well as the occasional recommendation.Sign up to receive it in your inbox every Friday. Heres our forecast for AI this year In December, our small but mighty AI reporting team was asked by our editors to make a prediction: Whats coming next for AI? As we look ahead, certain things are a given. We know that agentsAI models that do more than just converse with you and can actually go off and complete tasks for youare the focus of many AI companies right now. Similarly, the need to make AI faster and more energy efficient is putting so-called small language models in the spotlight. However, the other predictions were not so clear-cut. Read the full story. James O'Donnell This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. To witness the fallout from the AI teams lively debates (and hear more about what didnt make the list), you can join our upcoming LinkedIn Live this Thursday, January 16 at 12.30pm ET. James will be talking it all over with Will Douglas Heaven, our senior editor for AI, and our news editor, Charlotte Jee. The must-reads Ive combed the internet to find you todays most fun/important/scary/fascinating stories about technology. 1 China is considering selling TikTok to Elon Musk But its unclear how likely an outcome that really is. (Bloomberg $)+ Its certainly one way of allowing TikTok to remain in the US. (WSJ $)+ For what its worth, TikTok has dismissed the report as pure fiction. (Variety $)+ Xiaohongshu, also known as RedNote, is dealing with an influx of American users. (WP $)2 Amazon drivers are still delivering packages amid LA fires They're dropping off parcels even after neighborhoods have been instructed to evacuate. (404 Media)3 Alexa is getting a generative AI makeoverAmazon is racing to turn its digital assistant into an AI agent. (FT $) + What are AI agents? (MIT Technology Review)4 Animal manure is a major climate problem Unfortunately, turning it into energy is easier said than done. (Vox)+ How poop could help feed the planet. (MIT Technology Review) 5 Power lines caused many of Californias worst fires Thousands of blazes have been traced back to power infrastructure in recent decades. (NYT $)+ Why some homes manage to withstand wildfires. (Bloomberg $)+ The quest to build wildfire-resistant homes. (MIT Technology Review)6 Barcelona is a hotbed of spyware startups Researchers are increasingly concerned about its creep across Europe. (TechCrunch)7 Mastodons founder doesnt want to follow in Mark Zuckerbergs footstepsEugen Rochko has restructured the company to ensure it could never be controlled by a single individual. (Ars Technica) + Hes made it clear he doesnt want to end up like Elon Musk, either. (Engadget)8 Spare a thought for this Welsh would-be crypto millionaireHis 11-year quest to recover an old hard drive has come to a disappointing end. (Wired $) 9 The unbearable banality of internet lexicon Its giving nonsense. (The Atlantic $)10 You never know whether youll get to see the northern lights or not AI could help us to predict when theyll occur more accurately. (Vice)+ Digital pictures make the lights look much more defined than they actually are. (NYT $)Quote of the day Cutting fact checkers from social platforms is like disbanding your fire department. Alan Duke, co-founder of fact-checking outlet Lead Stories, criticizes Metas decision to ax its US-based fact checkers as the groups attempt to slow viral misinformation spreading about the wildfires in California, CNN reports. The big story The world is moving closer to a new cold war fought with authoritarian tech September 2022Despite President Bidens assurances that the US is not seeking a new cold war, one is brewing between the worlds autocracies and democraciesand technology is fueling it. Authoritarian states are following Chinas lead and are trending toward more digital rights abuses by increasing the mass digital surveillance of citizens, censorship, and controls on individual expression.And while democracies also use massive amounts of surveillance technology, its the tech trade relationships between authoritarian countries thats enabling the rise of digitally enabled social control. Read the full story.Tate Ryan-Mosley We can still have nice things A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.) + Before indie sleaze, there was DIY counterculture site Buddyhead.+ Did you know black holes dont actually suck anything in at all?+ Science fiction is stuck in a loop, and cant seem to break its fixation with cyberpunk.+ Every now and again, TV produces a perfect episode. Heres eight of them.
    0 Comments ·0 Shares ·55 Views
  • Whats next for nuclear power
    www.technologyreview.com
    MIT Technology Reviews Whats Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of themhere. While nuclear reactors have been generating power around the world for over 70 years, the current moment is one of potentially radical transformation for the technology. As electricity demand rises around the world for everything from electric vehicles to data centers, theres renewed interest in building new nuclear capacity, as well as extending the lifetime of existing plants and even reopening facilities that have been shut down. Efforts are also growing to rethink reactor designs, and 2025 marks a major test for so-called advanced reactors as they begin to move from ideas on paper into the construction phase. Thats significant because nuclear power promises a steady source of electricity as climate change pushes global temperatures to new heights and energy demand surges around the world. Heres what to expect next for the industry. A global patchwork The past two years have seen a new commitment to nuclear power around the globe, including an agreement at the UN climate talks that 31 countries pledged to triple global nuclear energy capacity by 2050. However, the prospects for the nuclear industry differ depending on where you look. The US is currently home to the highest number of operational nuclear reactors in the world. If its specific capacity were to triple, that would mean adding a somewhat staggering 200 gigawatts of new nuclear energy capacity to the current total of roughly 100 gigawatts. And thats in addition to replacing any expected retirements from a relatively old fleet. But the country has come to something of a stall. A new reactor at the Vogtle plant in Georgia came online last year (following significant delays and cost overruns), but there are no major conventional reactors under construction or in review by regulators in the US now. This year also brings an uncertain atmosphere for nuclear power in the US as the incoming Trump administration takes office. While the technology tends to have wide political support, its possible that policies like tariffs could affect the industry by increasing the cost of building materials like steel, says Jessica Lovering, cofounder at the Good Energy Collective, a policy research organization that advocates for the use of nuclear energy. Globally, most reactors under construction or in planning phases are in Asia, and growth in China is particularly impressive. The countrys first nuclear power plant connected to the grid in 1991, and in just a few decades it has built the third-largest fleet in the world, after only France and the US. China has four large reactors likely to come online this year, and another handful are scheduled for commissioning in 2026. This year will see both Bangladesh and Turkey start up their first nuclear reactors. Egypt also has its first nuclear plant under construction, though its not expected to undergo commissioning for several years. Advancing along Commercial nuclear reactors on the grid today, and most of those currently under construction, generally follow a similar blueprint: The fuel that powers the reactor is low-enriched uranium, and water is used as a coolant to control the temperature inside. But newer, advanced reactors are inching closer to commercial use. A wide range of these so-called Generation IV reactors are in development around the world, all deviating from the current blueprint in one way or another in an attempt to improve safety, efficiency, or both. Some use molten salt or a metal like lead as a coolant, while others use a more enriched version of uranium as a fuel. Often, theres a mix-and-match approach with variations on the fuel type and cooling methods. The next couple of years will be crucial for advanced nuclear technology as proposals and designs move toward the building process. Were watching paper reactors turn into real reactors, says Patrick White, research director at the Nuclear Innovation Alliance, a nonprofit think tank. Much of the funding and industrial activity in advanced reactors is centered in the US, where several companies are close to demonstrating their technology. Kairos Power is building reactors cooled by molten salt, specifically a fluorine-containing material called Flibe. The company received a construction permit from the US Nuclear Regulatory Commission (NRC) for its first demonstration reactor in late 2023, and a second permit for another plant in late 2024. Construction will take place on both facilities over the next few years, and the plan is to complete the first demonstration facility in 2027. TerraPower is another US-based company working on Gen IV reactors, though the design for its Natrium reactor uses liquid sodium as a coolant. The company is taking a slightly different approach to construction, too: by separating the nuclear and non-nuclear portions of the facility, it was able to break ground on part of its site in June of 2024. Its still waiting for construction approval from the NRC to begin work on the nuclear side, which the company expects to do by 2026. A US Department of Defense project could be the first in-progress Gen IV reactor to generate electricity, though itll be at a very small scale. Project Pele is a transportable microreactor being manufactured by BWXT Advanced Technologies. Assembly is set to begin early this year, with transportation to the final site at Idaho National Lab expected in 2026. Advanced reactors certainly arent limited to the US. Even as China is quickly building conventional reactors, the country is starting to make waves in a range of advanced technologies as well. Much of the focus is on high-temperature gas-cooled reactors, says Lorenzo Vergari, an assistant professor at the University of Illinois Urbana-Champaign. These reactors use helium gas as a coolant and reach temperatures over 1,500 C, much higher than other designs. Chinas first commercial demonstration reactor of this type came online in late 2023, and a handful of larger reactors that employ the technology are currently in planning phases or under construction. Squeezing capacity It will take years, or even decades, for even the farthest-along advanced reactor projects to truly pay off with large amounts of electricity on the grid. So amid growing global electricity demand around the world, theres renewed interest in getting as much power out of existing nuclear plants as possible. One trend thats taken off in countries with relatively old nuclear fleets is license extension. While many plants built in the 20th century were originally licensed to run for 40 years, theres no reason many of them cant run for longer if theyre properly maintained and some equipment is replaced. Regulators in the US have granted 20-year extensions to much of the fleet, bringing the expected lifetime of many to 60 years. A handful of reactors have seen their licenses extended even beyond that, to 80 years. Countries including France and Spain have also recently extended licenses of operating reactors beyond their 40-year initial lifetimes. Such extensions are likely to continue, and the next few years could see more reactors in the US relicensed for up to 80-year lifetimes. In addition, theres interest in reopening shuttered plants, particularly those that have shut down recently for economic reasons. Palisades Nuclear Plant in Michigan is the target of one such effort, and the project secured a $1.52 billion loan from the US Department of Energy to help with the costs of reviving it. Holtec, the plants owner and operator, is aiming to have the facility back online in 2025. However, the NRC has reported possible damage to some of the equipment at the plant, specifically the steam generators. Depending on the extent of the repairs needed, the additional cost could potentially make reopening uneconomical, White says. A reactor at the former Three Mile Island Nuclear Facility is another target. The sites owner says the reactor could be running again by 2028, though battles over connecting the plant to the grid could play out in the coming year or so. Finally, the owners of the Duane Arnold Energy Center in Iowa are reportedly considering reopening the nuclear plant, which shut down in 2020. Big Techs big appetite One of the factors driving the rising appetite for nuclear power is the stunning growth of AI, which relies on data centers requiring a huge amount of energy. Last year brought new interest from tech giants looking to nuclear as a potential solution to the AI power crunch. Microsoft had a major hand in plans to reopen the reactor at Three Mile Islandthe company signed a deal in 2024 to purchase power from the facility if its able to reopen. And thats just the beginning. Google signed a deal with Kairos Power in October 2024 that would see the startup build up to 500 megawatts worth of power plants by 2035, with Google purchasing the energy. Amazon went one step further than these deals, investing directly in X-energy, a company building small modular reactors. The money will directly fund the development, licensing, and construction of a project in Washington. Funding from big tech companies could be a major help in keeping existing reactors running and getting advanced projects off the ground, but many of these commitments so far are vague, says Good Energy Collectives Lovering. Major milestones to watch for include big financial commitments, contracts signed, and applications submitted to regulators, she says. Nuclear had an incredible 2024, probably the most exciting year for nuclear in many decades, says Staffan Qvist, a nuclear engineer and CEO of Quantified Carbon, an international consultancy focused on decarbonizing energy and industry. Deploying it at the scale required will be a big challenge, but interest is ratcheting up. As he puts it, Theres a big world out there hungry for power.
    0 Comments ·0 Shares ·63 Views
  • Heres our forecast for AI this year
    www.technologyreview.com
    This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. In December, our small but mighty AI reporting team was asked by our editors to make a prediction: Whats coming next for AI? In 2024, AI contributed both to Nobel Prizewinning chemistry breakthroughs and a mountain of cheaply made content that few people asked for but that nonetheless flooded the internet. Take AI-generated Shrimp Jesus images, among other examples. There was also a spike in greenhouse-gas emissions last year that can be attributed partly to the surge in energy-intensive AI. Our team got to thinking about how all of this will shake out in the year to come. As we look ahead, certain things are a given. We know that agentsAI models that do more than just converse with you and can actually go off and complete tasks for youare the focus of many AI companies right now. Building them will raise lots of privacy questions about how much of our data and preferences were willing to give up in exchange for tools that will (allegedly) save us time. Similarly, the need to make AI faster and more energy efficient is putting so-called small language models in the spotlight. We instead wanted to focus on less obvious predictions. Mine were about how AI companies that previously shunned work in defense and national security might be tempted this year by contracts from the Pentagon, and how Donald Trumps attitudes toward China could escalate the global race for the best semiconductors. Read the full list. Whats not evident in that story is that the other predictions were not so clear-cut. Arguments ensued about whether or not 2025 will be the year of intimate relationships with chatbots, AI throuples, or traumatic AI breakups. To witness the fallout from our teams lively debates (and hear more about what didnt make the list), you can join our upcoming LinkedIn Live this Thursday, January 16. Ill be talking it all over with Will Douglas Heaven, our senior editor for AI, and our news editor, Charlotte Jee. There are a couple other things Ill be watching closely in 2025. One is how little the major AI playersnamely OpenAI, Microsoft, and Googleare disclosing about the environmental burden of their models. Lots of evidence suggests that asking an AI model like ChatGPT about knowable facts, like the capital of Mexico, consumes much more energy (and releases far more emissions) than simply asking a search engine. Nonetheless, OpenAIs Sam Altman in recent interviews has spoken positively about the idea of ChatGPT replacing the googling that weve all learned to do in the past two decades. Its already happening, in fact. The environmental cost of all this will be top of mind for me in 2025, as will the possible cultural cost. We will go from searching for information by clicking links and (hopefully) evaluating sources to simply reading the responses that AI search engines serve up for us. As our editor in chief, Mat Honan, said in his piece on the subject, Who wants to have to learn when you can just know? Now read the rest of The Algorithm Deeper Learning Whats next for our privacy? The US Federal Trade Commission has taken a number of enforcement actions against data brokers, some of which have tracked and sold geolocation data from users at sensitive locations like churches, hospitals, and military installations without explicit consent. Though limited in nature, these actions may offer some new and improved protections for Americans personal information. Why it matters: A consensus is growing that Americans need better privacy protectionsand that the best way to deliver them would be for Congress to pass comprehensive federal privacy legislation. Unfortunately, thats not going to happen anytime soon. Enforcement actions from agencies like the FTC might be the next best thing in the meantime. Read more in Eileen Guos excellent story here. Bits and Bytes Meta trained its AI on a notorious piracy database New court records, Wired reports, reveal that Meta used a notorious so-called shadow library of pirated books that originated in Russia to train its generative AI models. (Wired) OpenAIs top reasoning model struggles with the NYT Connections game The game requires players to identify how groups of words are related. OpenAIs o1 reasoning model had a hard time. (Mind Matters) Anthropics chief scientist on 5 ways agents will be even better in 2025 The AI company Anthropic is now worth $60 billion. The companys cofounder and chief scientist, Jared Kaplan, shared how AI agents will develop in the coming year. (MIT Technology Review) A New York legislator attempts to regulate AI with a new bill This year, a high-profile bill in California to regulate the AI industry was vetoed by Governor Gavin Newsom. Now, a legislator in New York is trying to revive the effort in his own state. (MIT Technology Review)
    0 Comments ·0 Shares ·60 Views
  • Mark Zuckerberg and the power of the media
    www.technologyreview.com
    This article first appeared in The Debrief,MIT Technology Reviewsweekly newsletter from our editor in chief Mat Honan. To receive it in your inbox every Friday, sign up here. On Tuesday last week, Meta CEO Mark Zuckerberg released a blog post and video titled More Speech and Fewer Mistakes. Zuckerbergwhose previous self-acknowledged mistakes includethe Cambridge Analytica data scandal, allowinga militia to put out a call to armson Facebook that presaged two killings in Wisconsin, and helping tofuel a genocide in Myanmarannounced that Meta is done with fact checking in the US, that it will roll back restrictions on speech, and is going to start showing people more tailored political content in their feeds. I started building social media to give people a voice, he said while wearing a$900,000 wristwatch. While the end of fact checking has gotten most of the attention, thechanges to its hateful speech policyare also notable. Amongother things, the company will now allow people to call transgender people it, or to argue that women are property, or to claim homosexuality is a mental illness. (Thiswent over predictably wellwith LGBTQ employees at Meta.) Meanwhile, thanks to that more personalized approach to political content, it looks likepolarizationis back on the menu, boys. Zuckerbergs announcement was one of the most cynical displays of revisionist history I hope Ill ever see. As very many people have pointed out, it seems to be little more than an effort to curry favor with the incoming Trump administrationcomplete with a roll out onFox and Friends. Ill leave it to others right now to parse the specific political implications here (and many people are certainly doing so). Rather, what struck me as so cynical was the way Zuckerberg presented Facebooks history of fact-checking and content moderation as something he was pressured into doing by the government and media. The reality, of course, is that these were his decisions. He structured Meta so thathe has near total control over it. He famously calls the shots,and always has. Yet in Tuesdays announcement, Zuckerberg tries to blame others for the policies he himself instituted and endorsed. Governments and legacy media have pushed to censor more and more, he said.He went on: After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. We tried in good faith to address those concerns without becoming the arbiters of truth, but the fact-checkers have just been too politically biased and have destroyed more trust than they've created, especially in the US. While Im not here to defend Metas fact checking system, I never thought it was particularly useful or effective, lets get into the claims that it was done at the behest of the government and legacy media. To start: The US government has never taken any meaningful enforcement actions against Meta whatsoever, and definitely nothing meaningful related to misinformation. Full stop. End of story. Call it a day. Sure, there have beenfinesandsettlements, but for a company the size of Meta, these were mosquitos to be slapped away. Perhaps more significantly, there isan FTC antitrust case working its way through the court, but it again has nothing to do with censorship or fact-checking. And when it comes to the media, consider the real power dynamics at play. Meta, with a current market cap of $1.54 trillion, is worth more than the combined value of the Walt Disney Company (which owns ABC news), Comcast (NBC), Paramount (CBS), Warner Bros (CNN), the New York Times Company, and Fox Corp (Fox News). In fact, Zuckerbergsestimated personal net worthis greater than the market cap of any of those single companies. Meanwhile, Metas audience completely dwarfs that of any legacy media company. According to the tech giant, itenjoys some 3.29 billion daily active users. Daily! And as the company has repeatedly shown, including in this weeks announcements, it is more than willing to twiddle its knobs to control what that audience sees from the legacy media. As a result, publishers have long bent the knee to Meta to try and get even slivers of that audience. Remember thepivot to video? OrInstant Articles? Media has spent more than a decade now trying to respond or get ahead of what Facebook says it wants to feature, only for it to change its mind and throttle traffic. The notion that publishers have any leverage whatsoever over Meta is preposterous. I think its useful to go back and look at how the company got here. Once upon a time Twitter was an actual threat to Facebooks business. After the 2012 election, for which Twitter was central and Facebook was an afterthought, Zuckerberg and company went hard after news. It created share buttons so people could easily drop content from around the Web into their feeds. By 2014, Zuckerbergwas saying he wanted it to be the perfect personalized newspaper for everyone in the world. But there were consequences to this. By 2015, it had a fake news epidemic on its hands,which it was well aware of. By the time the election rolled around in 2016, Macedonian teens hadfamously turned fake news into an arbitrage play, creating bogus pro-Trump news stories expressly to take advantage of the combination of Facebook traffic and Google AdSense dollars. Following the 2016 election, this allblew up in Facebooks face. And in December of that year,it announced it would begin partnering with fact checkers. A year later, Zuckerberg went on to say the issue of misinformation was too important an issue to be dismissive. Until, apparently, right now. Zuckerberg elided all this inconvenient history. But lets be real. No one forced him to hire fact checkers. No one was in a position to even truly pressure him to do so. If that were the case, he would not now be in a position to fire them from behind a desk wearing his $900,000 watch. He made the very choices which he now seeks to shirk responsibility for. But heres the thing, people already know Mark Zuckerberg too well for this transparent sucking up to be effective. Republicans already hate Zuck. Sen. Lindsey Graham has accused him of having blood on his hands. Sen. Josh Hawleyforced him to make an awkward apologyto the families of children harmed on his platform. Sen. Ted Cruz has, onmultiple occasions,torn into him. Trump famouslythreatened to throw him in prison. But so too do Democrats. Sen.Elizabeth Warren,Sen. Bernie Sanders, andAOChave all ripped him. And among the general public, hes bothless popularthanTrumpand more disliked thanJoe Biden. He loses on both counts toElon Musk.Tuesdays announcement ultimately seems little more than pandering foran audience that will never accept him. And while it may not be successful at winning MAGA over, at least the shamelessness and ignoring all past precedent is fully in character. After all, lets remember what Mark Zuckerbergwas busy doingin 2017: Image: Mark Zuckerberg Instagram Now read the rest of The Debrief The News NVIDIA CEO Jensen Huangs remarks about quantum computingcaused quantum stocks to plummet. See our predictionsfor whats coming for AI in 2025. Heres what the US is doing toprepare for a bird flu pandemic. New York state will try to passan AI bill similar to the one that died in California. EVs are projected to bemore than 50 percent of auto sales in China next year, 10 years ahead of targets. The Chat Every week, I talk to one ofMIT Technology Reviews journalists to go behind the scenes of a story they are working on. But this week, I turned the tables a bit and asked some of our editors to grill me aboutmy recent story on the rise of generative search.Charlotte Jee:What makes you feel so sure that AI search is going to take off? Mat:I just dont think theres any going back. There are definitely problems with itit can be wild with inaccuracies when it cobbles those answers together. But I think, for the most part it is, to refer to my old colleague Rob Capps phenomenal essay,good enough. And I think thats what usually wins the day. Easy answers that are good enough. Maybe thats a sad statement, but I think its true. Will Douglas Heaven:For years I've been asked if I think AI will take away my job and I always scoffed at the idea. Now I'm not so sure. I still don't think AI is about to do my job exactly. But I think it might destroy the business model that makes my job exist. And that's entirely down to this reinvention of search. As a journalistand editor of the magazine that pays my billshow worried are you? What can youwedo about it? Mat:Is this a trap? This feels like a trap, Will. Im going to give you two answers here. I think we, as inMIT Technology Review, are relatively insulated here. Were a subscription business. Were less reliant on traffic than most. Were also technology wonks, who tend to go deeper than what you might find in most tech pubs, which I think plays to our benefit. But I am worried about it and I do think it will be a problem for us, and for others. One thing Rand Fishkin,who has long studied zero-click searchesat SparkToro, said to me that wound up getting cut from my story was that brands needed to think more and more about how to build brand awareness. You can do that, for example, by being oft-cited in these models, by being seen as a reliable source. Hopefully, when people ask a question and see us as the expert the model is leaning on, that helps us build our brand and reputation. And maybe they become a readers. Thats a lot more leaps than a link out, obviously. But as he also said to me, if your business model is built on search referralsand for a lot of publishers that is definitely the caseyoure in trouble. Will:Is Google going to survive as a verb? If not, what are we going to call this new activity? Mat:I kinda feel like it is already dying. This is anecdotal, but my kids and all their friends almost exclusively use the phrase search up. As in search up George Washington or search up a pizza dough recipe. Often its followed by a platform, search up Charli XCX on Spotify. We live in California. What floored me was when I heard kids in New Hampshire and Georgia using the exact same phrase. But also I feel like were just going into a more conversational mode here. Maybe we dont call it anything. James ODonnell:I found myself highlighting this line from your piece: "Who wants to have to learn when you can just know?" Part of me thinks the process of finding information with AI search is pretty niceit can allow you to just follow your own curiosity a bit more than traditional search. But I also wonder how the meaning of research may change. Doesn't the process of "digging" do something for us and our minds that AI search will eliminate? Mat: Oh, this occurred to me too! I asked about it in one of my conversations with Google in fact. Blake Montgomeryhas a fantastic essay on this very thing. He talks about how he cant navigate without Google Maps, cant meet guys without Grindr, and wonders what effect ChatGPT will have on him. If you have not previously, you should read it. Niall Firth:How much do you use AI search yourself? Do you feel conflicted about it? Mat:I use it quite a bit. I find myself crafting queries for Google that I think will generate an AI Overview in fact. And I use ChatGPT a lot as well. I like being able to ask a long, complicated question, and I find that it often does a better job of getting at the heart of what Im looking for especially when Im looking for something very specificbecause it can suss out the intent along with the key words and phrases. For example, for the story above I asked What did Mark Zuckerberg say about misinformation and harmful content in 2016 and 2017? Ignore any news articles from the previous few days and focus only on his remarks in 2016 and 2017. The top traditional Google result for that querywas this storythat I would have wanted specifically excluded. It also coughed up several others from the last few days in the top results. But ChatGPT was able to understand my intent and helped me find the older source material. And yes, I feel conflicted. Both because I worry about its economic impact on publishers and Im well aware that theres a lot of junk in there. Its also just sort of an unpopular opinion. Sometimes it feels a bit like smoking, but I do it anyway. The Recommendation Most of the time, the recommendation is for something positive that I think people will enjoy. A song. A book. An app. Etc. This week though Im going to suggest you take a look at something a little more unsettling. Nat Friedman, the former CEO of GitHub, set out to try and understand how much microplastic is in our food supply. He and a team tested hundreds of samples from foods drawn from the San Francisco Bay Area (but very many of which are nationally distributed). The results are pretty shocking. As a disclaimer on the site reads: we have refrained from drawing high-confidence conclusions from these results, and we think that you should, too. Consider this a snapshot of our raw test results, suitable as a starting point and inspiration for further work, but not solid enough on its own to draw conclusions or make policy recommendations or even necessarily to alter your personal purchasing decisions. With that said:check it out.
    0 Comments ·0 Shares ·64 Views
  • The Download: IVF embryo limbo, and Anthropic on AI agents
    www.technologyreview.com
    This is today's edition ofThe Download,our weekday newsletter that provides a daily dose of what's going on in the world of technology. Inside the strange limbo facing millions of IVF embryos Millions of embryos created through IVF sit frozen in time, stored in cryopreservation tanks around the world. The number is only growing thanks to advances in technology, the rising popularity of IVF, and improvements in its success rates. At a basic level, an embryo is simply a tiny ball of a hundred or so cells. But unlike other types of body tissue, it holds the potential for life. Many argue that this endows embryos with a special moral status, one that requires special protections. The problem is that no one can really agree on what that status is. To some, theyre human cells and nothing else. To others, theyre morally equivalent to children. Many feel they exist somewhere between those two extremes. While these embryos persist in suspended animation, patients, clinicians, embryologists, and legislators must grapple with the essential question of what we should do with them. What do these embryos mean to us? Who should be responsible for them?Read the full story. Jessica Hamzelou Anthropics chief scientist on 5 ways agents will be even better in 2025 Agents are the hottest thing in tech right now. Top firms from Google DeepMind to OpenAI to Anthropic are racing to augment large language models with the ability to carry out tasks by themselves. In October, Anthropic showed off one of the most advanced agents yet: an extension of its Claude large language model called computer use. As the name suggests, it lets you direct Claude to use a computer much as a person would, by moving a cursor, clicking buttons, and typing text. Instead of simply having a conversation with Claude, you can now ask it to carry out on-screen tasks for you. Computer use is a glimpse of whats to come for agents. To learn whats coming next, MIT Technology Review talked to Anthropics cofounder and chief scientist Jared Kaplan. Here are five ways that agents are going to get even better in 2025. Melissa Heikkil & Will Douglas Heaven Small language models: 10 Breakthrough Technologies 2025 Make no mistake: Size matters in the AI world. When OpenAI launched GPT-3 back in 2020, it was the largest language model ever built. The firm showed that supersizing this type of model was enough to send performance through the roof. That kicked off a technology boom that has been sustained by bigger models ever since. But as the marginal gains for new high-end models trail off, researchers are figuring out how to do more with less. For certain tasks, smaller models that are trained on more focused data sets can now perform just as well as larger onesif not better. Read the full story.Will Douglas Heaven Small language models is one of our 10 Breakthrough Technologies for 2025, MIT Technology Reviews annual list of tech to watch. Check out the rest of the list, and cast your vote for the honorary 11th breakthrough. The must-reads Ive combed the internet to find you todays most fun/important/scary/fascinating stories about technology. 1 Blue Origins rocket launch has been cancelled Its engineers were unable to fix an issue with the New Glenn rockets vehicle subsystem. (BBC)+ Its also likely that ice blocked an essential vent line designed to expel gas. (Ars Technica)+ The company is yet to announce a rescheduled launch date. (The Verge)2 How is Donald Trump planning to save TikTok, exactly? Its unclear whether his supposed deal-making prowess will hold any sway here. (WP $)+ TikTok founder Zhang Yiming might have a few ideas. (WSJ $)+ It looks as though the US Supreme Court is leaning towards banning the app. (Forbes $)+ The depressing truth about TikToks impending ban. (MIT Technology Review)3 The Biden administrations final chip export curb is here The policy is designed to make it harder for China to circumvent restrictions. (FT $)+ Australia, Japan, South Korea and Taiwan wont be restricted under the new rules. (CNN)+ Nvidia thinks all these sanctions are only backfiring on the US. (Quartz)4 Big Techs leaders are lining up to attend Trumps inauguration Silicon Valleys sucking up continues. (Bloomberg $)+ Mark Zuckerberg appears to be doing his best to secure an invite. (NYT $)+ He seems to be entering Founder Mode in a bid to impress Trump. (The Verge)5 AI financial advisers are going after broke young people Its money management tips come with a hefty price tag. (Wired $) 6 Neuralink has implanted a brain device in a third person, according to MuskAhead of its plans to insert up to 30 devices this year. (Fortune $) + Beyond Neuralink: Meet the other companies developing brain-computer interfaces. (MIT Technology Review)7 The future of self-driving cars is cleaved in two Companies are divided over whether well hail or own future autonomous vehicles. (NY Mag $)+ How Wayves driverless cars will meet one of their biggest challenges yet. (MIT Technology Review)8 Smartwatches are out, old-school watches are inIts hard to beat a wristwatch when it comes to luxury status symbols. (The Guardian) 9 Notre-Dame cathedral is full of hidden speakers And you can fit out your home with them toofor a price. (FT $)10 How to free up space on your iPhone Dont be afraid to purge those ancient duplicate photos. (WSJ $)Quote of the day I'm worried about everything." Jeff Bezos describes his (well-placed) nerves to Ars Technica ahead of his rocket company Blue Origins first orbital launchwhich was later called off over technical issues. The big story AI was supposed to make police bodycams better. What happened? April 2024 When police departments first started buying and deploying bodycams in the wake of the police killing of Michael Brown in Ferguson, Missouri, a decade ago, activists hoped it would bring about real change. Years later, despite whats become a multibillion-dollar market for these devices, the tech is far from a panacea. Most footage they generate goes unwatched. Officers often don't use them properly. And if they do finally provide video to the public, it usually doesnt tell the complete story. A handful of AI startups see this problem as an opportunity to create what are essentially bodycam-to-text programs for different players in the legal system, mining this footage for misdeeds. But like the bodycams themselves, the technology still faces procedural, legal, and cultural barriers to success. Read the full story. Patrick Sisson We can still have nice things A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.) + The first big fashion trend of 2025? Were all going basic.+ Spoilers aheadthis list of the best film endings is great funincluding that infamous lingering final shot from Psycho.+ If parts of your life could be better, its time to embrace the tiny changes that can make a real difference.+ This Brazilian banana bread recipe sounds beyond delicious.
    0 Comments ·0 Shares ·103 Views
  • Inside the strange limbo facing millions of IVF embryos
    www.technologyreview.com
    Lisa Holligan already had two children when she decided to try for another baby. Her first two pregnancies had come easily. But for some unknown reason, the third didnt. Holligan and her husband experienced miscarriage after miscarriage after miscarriage. Like many other people struggling to conceive, Holligan turned to in vitro fertilization, or IVF. The technology allows embryologists to take sperm and eggs and fuse them outside the body, creating embryos that can then be transferred into a persons uterus. The fertility clinic treating Holligan was able to create six embryos using her eggs and her husbands sperm. Genetic tests revealed that only three of these were genetically normal. After the first was transferred, Holligan got pregnant. Then she experienced yet another miscarriage. I felt numb, she recalls. But the second transfer, which took place several months later, stuck. And little Quinn, who turns four in February, was the eventual happy result. She is the light in our lives, says Holligan. Holligan, who lives in the UK, opted to donate her genetically abnormal embryos for scientific research. But she still has one healthy embryo frozen in storage. And she doesnt know what to do with it. Should she and her husband donate it to another family? Destroy it? Its almost four years down the line, and we still havent done anything with [the embryo], she says. The clinic hasnt been helpfulHolligan doesnt remember talking about what to do with leftover embryos at the time, and no one there has been in touch with her for years, she says. Holligans embryo is far from the only one in this peculiar limbo. Millionsor potentially tens of millionsof embryos created through IVF sit frozen in time, stored in cryopreservation tanks around the world. The number is only growing thanks to advances in technology, the rising popularity of IVF, and improvements in its success rates. At a basic level, an embryo is simply a tiny ball of a hundred or so cells. But unlike other types of body tissue, it holds the potential for life. Many argue that this endows embryos with a special moral status, one that requires special protections. The problem is that no one can really agree on what that status is. To some, theyre human cells and nothing else. To others, theyre morally equivalent to children. Many feel they exist somewhere between those two extremes. There are debates, too, over how we should classify embryos in law. Are they property? Do they have a legal status? These questions are important: There have been multiple legal disputes over who gets to use embryos, who is responsible if they are damaged, and who gets the final say over their fate. And the answers will depend not only on scientific factors, but also on ethical, cultural, and religious ones. The options currently available to people with leftover IVF embryos mirror this confusion. As a UK resident, Holligan can choose to discard her embryos, make them available to other prospective parents, or donate them for research. People in the US can also opt for adoption, placing their embryos with families they get to choose. In Germany, people are not typically allowed to freeze embryos at all. And in Italy, embryos that are not used by the intended parents cannot be discarded or donated. They must remain frozen, ostensibly forever. While these embryos persist in suspended animation, patients, clinicians, embryologists, and legislators must grapple with the essential question of what we should do with them. What do these embryos mean to us? Who should be responsible for them? Meanwhile, many of these same people are trying to find ways to bring down the total number of embryos in storage. Maintenance costs are high. Some clinics are running out of space. And with a greater number of embryos in storage, there are more opportunities for human error. They are grappling with how to get a handle on the growing number of embryos stuck in storage with nowhere to go. The embryo boom There are a few reasons why this has become such a conundrum. And they largely come down to an increasing demand for IVF and improvements in the way it is practiced. Its a problem of our own creation, says Pietro Bortoletto, a reproductive endocrinologist at Boston IVF in Massachusetts. IVF has only become as successful as it is today by generating lots of excess eggs and embryos along the way, he says. To have the best chance of creating healthy embryos that will attach to the uterus and grow in a successful pregnancy, clinics will try to collect multiple eggs. People who undergo IVF will typically take a course of hormone injections to stimulate their ovaries. Instead of releasing a single egg that month, they can expect to produce somewhere between seven and 20 eggs. These eggs can be collected via a needle that passes through the vagina and into the ovaries. The eggs are then taken to a lab, where they are introduced to sperm. Around 70% to 80% of IVF eggs are successfully fertilized to create embryos. The embryos are then grown in the lab. After around five to seven days an embryo reaches a stage of development at which it is called a blastocyst, and it is ready to be transferred to a uterus. Not all IVF embryos reach this stage, howeveronly around 30% to 50% of them make it to day five. This process might leave a person with no viable embryos. It could also result in more than 10, only one of which is typically transferred in each pregnancy attempt. In a typical IVF cycle, one embryo might be transferred to the persons uterus fresh, while any others that were created are frozen and stored. IVF success rates have increased over time, in large part thanks to improvements in this storage technology. A little over a decade ago, embryologists tended to use a slow freeze technique, says Bortoletto, and many embryos didnt survive the process. Embryos are now vitrified instead, using liquid nitrogen to rapidly cool them from room temperature to -196 C in less than two seconds. Vitrification essentially turns all the water in the embryos into a glasslike state, avoiding the formation of damaging ice crystals. Now, clinics increasingly take a freeze all approach, in which they cryopreserve all the viable embryos and dont start transferring them until later. In some cases, this is so that the clinic has a chance to perform genetic tests on the embryo they plan to transfer. An assortment of sperm and embryos, preserved in liquid nitrogen.ALAMY Once a lab-grown embryo is around seven days old, embryologists can remove a few cells for preimplantation genetic testing (PGT), which screens for genetic factors that might make healthy development less likely or predispose any resulting children to genetic diseases. PGT is increasingly popular in the USin 2014, it was used in 13% of IVF cycles, but by 2016, that figure had increased to 27%. Embryos that undergo PGT have to be frozen while the tests are run, which typically takes a week or two, says Bortoletto: You cant continue to grow them until you get those results back. And there doesnt seem to be a limit to how long an embryo can stay in storage. In 2022, a couple in Oregon had twins who developed from embryos that had been frozen for 30 years. Put this all together, and its easy to see how the number of embryos in storage is rocketing. Were making and storing more embryos than ever before. When you combine that with the growing demand for IVF, which is increasing in use by the year, perhaps its not surprising that the number of embryos sitting in storage tanks is estimated to be in the millions. I say estimated, because no one really knows how many there are. In 2003, the results of a survey of fertility clinics in the US suggested that there were around 400,000 in storage. Ten years later, in 2013, another pair of researchers estimated that, in total, around 1.4 million embryos had been cryopreserved in the US. But Alana Cattapan, now a political scientist at the University of Waterloo in Ontario, Canada, and her colleagues found flaws in the study and wrote in 2015 that the number could be closer to 4 million. That was a decade ago. When I asked embryologists what they thought the number might be in the US today, I got responses between 1 million and 10 million. Bortoletto puts it somewhere around 5 million. Globally, the figure is much higher. There could be tens of millions of embryos, invisible to the naked eye, kept in a form of suspended animation. Some for months, years, or decades. Others indefinitely. Stuck in limbo In theory, people who have embryos left over from IVF have a few options for what to do with them. They could donate the embryos for someone else to use. Often this can be done anonymously (although genetic tests might later reveal the biological parents of any children that result). They could also donate the embryos for research purposes. Or they could choose to discard them. One way to do this is to expose the embryos to air, causing the cells to die. Studies suggest that around 40% of people with cryopreserved embryos struggle to make this decision, and that many put it off for five years or more. For some people, none of the options are appealing. In practice, too, the available options vary greatly depending on where you are. And many of them lead to limbo. Take Spain, for example, which is a European fertility hub, partly because IVF there is a lot cheaper than in other Western European countries, says Giuliana Baccino, managing director of New Life Bank, a storage facility for eggs and sperm in Buenos Aires, Argentina, and vice chair of the European Fertility Society. Operating costs are low, and theres healthy competitionthere are around 330 IVF clinics operating in Spain. (For comparison, there are around 500 IVF clinics in the US, which has a population almost seven times greater.) Baccino, who is based in Madrid, says she often hears of foreign patients in their late 40s who create eight or nine embryos for IVF in Spain but end up using only one or two of them. They go back to their home countries to have their babies, and the embryos stay in Spain, she says. These individuals often dont come back for their remaining embryos, either because they have completed their families or because they age out of IVF eligibility (Spanish clinics tend not to offer the treatment to people over 50). An embryo sample is removed from cryogenic storage.GETTY IMAGES In 2023, the Spanish Fertility Society estimated that there were 668,082 embryos in storage in Spain, and that around 60,000 of them were in a situation of abandonment. In these cases the clinics might not be able to reach the intended parents, or might not have a clear directive from them, and might not want to destroy any embryos in case the patients ask for them later. But Spanish clinics are wary of discarding embryos even when they have permission to do so, says Baccino. We always try to avoid trouble, she says. And we end up with embryos in this black hole. This happens to embryos in the US, too. Clinics can lose touch with their patients, who may move away or forget about their remaining embryos once they have completed their families. Other people may put off making decisions about those embryos and stop communicating with the clinic. In cases like these, clinics tend to hold onto the embryos, covering the storage fees themselves. Nowadays clinics ask their patients to sign contracts that cover long-term storage of embryosand the conditions of their disposal. But even with those in hand, it can be easier for clinics to leave the embryos in place indefinitely. Clinics are wary of disposing of them without explicit consent, because of potential liability, says Cattapan, who has researched the issue. People put so much time, energy, money into creating these embryos. What if they come back? Bortolettos clinic has been in business for 35 years, and the handful of sites it operates in the US have a total of over 47,000 embryos in storage, he says. Our oldest embryo in storage was frozen in 1989, he adds. Some people may not even know where their embryos are. Sam Everingham, who founded and directs Growing Families, an organization offering advice on surrogacy and cross-border donations, traveled with his partner from their home in Melbourne, Australia, to India to find an egg donor and surrogate back in 2009. It was a Wild West back then, he recalls. Everingham and his partner used donor eggs to create eight embryos with their sperm. Everingham found the experience of trying to bring those embryos to birth traumatic. Baby Zac was stillborn. Baby Ben died at seven weeks. We picked ourselves up and went again, he recalls. Two embryo transfers were successful, and the pair have two daughters today. But the fate of the rest of their embryos is unclear. Indias government decided to ban commercial surrogacy for foreigners in 2015, and Everingham lost track of where they are. He says hes okay with that. As far as hes concerned, those embryos are just cells. He knows not everyone feels the same way. A few days before we spoke, Everingham had hosted a couple for dinner. They had embryos in storage and couldnt agree on what to do with them. The mother wanted them donated to somebody, says Everingham. Her husband was very uncomfortable with the idea. [They have] paid storage fees for 14 years for those embryos because neither can agree on what to do with them, says Everingham. And this is a very typical scenario. Lisa Holligans experience is similar. Holligan thought shed like to donate her last embryo to another personsomeone else who might have been struggling to conceive. But my husband and I had very different views on it, she recalls. He saw the embryo as their child and said he wouldnt feel comfortable with giving it up to another family. I started having these thoughts about a child coming to me when theyre older, saying theyve had a terrible life, and [asking] Why didnt you have me? she says. After all, her daughter Quinn began as an embryo that was in storage for months. She was frozen in time. She could have been frozen for five years like [the leftover] embryo and still be her, she says. I know it sounds a bit strange, but this embryo could be a child in 20 years time. The science is just mind-blowing, and I think I just block it out. Its far too much to think about. No choice at all Choosing the fate of your embryos can be difficult. But some people have no options at all. This is the case in Italy, where the laws surrounding assisted reproductive technology have grown increasingly restrictive. Since 2004, IVF has been accessible only to heterosexual couples who are either married or cohabiting. Surrogacy has also been prohibited in the country for the last 20 years, and in 2024, it was made a universal crime. The move means Italians can be prosecuted for engaging in surrogacy anywhere in the world, a position Italy has also taken on the crimes of genocide and torture, says Sara Dalla Costa, a lawyer specializing in assisted reproduction and an IVF clinic manager at Instituto Bernabeu on the outskirts of Venice. The law surrounding leftover embryos is similarly inflexible. Dalla Costa says there are around 900,000 embryos in storage in Italy, basing the estimate on figures published in 2021 and the number of IVF cycles performed since then. By law, these embryos cannot be discarded. They cannot be donated to other people, and they cannot be used for research. Even when genetic tests show that the embryo has genetic features making it incompatible with life, it must remain in storage, forever, says Dalla Costa. There are a lot of patients that want to destroy embryos, she says. For that, they must transfer their embryos to Spain or other countries where it is allowed. Even people who want to use their embryos may age out of using them. Dalla Costa gives the example of a 48-year-old woman who undergoes IVF and creates five embryos. If the first embryo transfer happens to result in a successful pregnancy, the other four will end up in storage. Once she turns 50, this woman wont be eligible for IVF in Italy. Her remaining embryos become stuck in limbo. They will be stored in our biobanks forever, says Dalla Costa. Dalla Costa says she has a lot of examples of couples who separate after creating embryos together. For many of them, the stored embryos become a psychological burden. With no way of discarding them, these couples are forever connected through their cryopreserved cells. A lot of our patients are stressed for this reason, she says. Earlier this year, one of Dalla Costas clients passed away, leaving behind the embryos shed created with her husband. He asked the clinic to destroy them. In cases like these, Dalla Costa will contact the Italian Ministry of Health. She has never been granted permission to discard an embryo, but she hopes that highlighting cases like these might at least raise awareness about the dilemmas the countrys policies are creating for some people. Snowflakes and embabies In Italy, embryos have a legal status. They have protected rights and are viewed almost as children. This sentiment isnt specific to Italy. It is shared by plenty of individuals who have been through IVF. Some people call them embabies or freezer babies, says Cattapan. It is also shared by embryo adoption agencies in the US. Beth Button is executive director of one such program, called Snowflakesa division of Nightlight Christian Adoptions agency, which considers cryopreserved embryos to be children, frozen in time, waiting to be born. Snowflakes matches embryo donors, or placing families, with recipients, termed adopting families. Both parties share their information and essentially get to choose who they donate to or receive from. By the end of 2024, 1,316 babies had been born through the Snowflakes embryo adoption program, says Button. Button thinks that far too many embryos are being created in IVF labs around the US. Around 10 years ago, her agency received a donation from a couple that had around 38 leftover embryos to donate. We really encourage [people with leftover embryos in storage] to make a decision [about their fate], even though its an emotional, difficult decision, she says. Obviously, we just try to keep [that discussion] focused on the child, she says. Is it better for these children to be sitting in a freezer, even though that might be easier for you, or is it better for them to have a chance to be born into a loving family? That kind of pushes them to the point where theyre ready to make that decision. Button and her colleagues feel especially strongly about embryos that have been in storage for a long time. These embryos are usually difficult to place, because they are thought to be of poorer quality, or less likely to successfully thaw and result in a healthy birth. The agency runs a program called Open Hearts specifically to place them, along with others that are harder to match for various reasons. People who accept one but fail to conceive are given a shot with another embryo, free of charge. These nitrogen tanks at New Hope Fertility Center in New York hold tens of thousands of frozen embryos and eggs.GETTY IMAGES We have seen perfectly healthy children born from very old embryos, [as well as] embryos that were considered such poor quality that doctors didnt even want to transfer them, says Button. Right now, we have a couple who is pregnant with [an embryo] that was frozen for 30 and a half years. If that pregnancy is successful, that will be a record for us, and I think it will be a worldwide record as well. Many embryologists bristle at the idea of calling an embryo a child, though. Embryos are property. They are not unborn children, says Bortoletto. In the best case, embryos create pregnancies around 65% of the time, he says. They are not unborn children, he repeats. Person or property? In 2020, an unauthorized person allegedly entered an IVF clinic in Alabama and pulled frozen embryos from storage, destroying them. Three sets of intended parents filed suit over their wrongful death. A trial court dismissed the claims, but the Alabama Supreme Court disagreed, essentially determining that those embryos were people. The ruling shocked many and was expected to have a chilling effect on IVF in the state, although within a few weeks, the state legislature granted criminal and civil immunity to IVF clinics. But the Alabama decision is the exception. While there are active efforts in some states to endow embryos with the same legal rights as people, a move that could potentially limit access to abortion, most of the [legal] rulings in this area have made it very clear that embryos are not people, says Rich Vaughn, an attorney specializing in fertility law and the founder of the US-based International Fertility Law Group. At the same time, embryos are not just property. Theyre something in between, says Vaughn. Theyre sort of a special type of property. UK law takes a similar approach: The language surrounding embryos and IVF was drafted with the idea that the embryo has some kind of special status, although it was never made entirely clear exactly what that special status is, says James Lawford Davies, a solicitor and partner at LDMH Partners, a law firm based in York, England, that specializes in life sciences. Over the years, the language has been tweaked to encompass embryos that might arise from IVF, cloning, or other means; it is a bit of a fudge, says Lawford Davies. Today, the officialif somewhat circularlegal definition in the Human Fertilisation and Embryology Act reads: embryo means a live human embryo. And while people who use their eggs or sperm to create embryos might view these embryos as theirs, according to UK law, embryos are more like a stateless bundle of cells, says Lawford Davies. Theyre not quite propertypeople dont own embryos. They just have control over how they are used. Many legal disputes revolve around who has control. This was the experience of Natallie Evans, who created embryos with her then partner Howard Johnston in the UK in 2001. The couple separated in 2002. Johnston wrote to the clinic to ask that their embryos be destroyed. But Evans, who had been diagnosed with ovarian cancer in 2001,argued that Johnston had already consented to their creation, storage, and use and should not be allowed to change his mind. The case eventually made it to the European Court of Human Rights, and Evans lost. The case set a precedent that consent was key and could be withdrawn at any time. In Italy, on the other hand, withdrawing consent isnt always possible. In 2021, a case like Natallie Evanss unfolded in the Italian courts: A woman who wanted to proceed with implantation after separating from her partner went to court for authorization. She said that it was her last chance to be a mother, says Dalla Costa. The judge ruled in her favor. Dalla Costas clinics in Italy are now changing their policies to align with this decision. Male partners must sign a form acknowledging that they cannot prevent embryos from being used once theyve been created. The US situation is even more complicated, because each state has its own approach to fertility regulation. When I looked through a series of published legal disputes over embryos, I found little consistencysometimes courts ruled to allow a woman to use an embryo without the consent of her former partner, and sometimes they didnt. Some states have comprehensive legislation; some do not, says Vaughn. Some have piecemeal legislation, some have only case law, some have all of the above, some have none of the above. The meaning of an embryo So how should we define an embryo? Its the million-dollar question, says Heidi Mertes, a bioethicist at Ghent University in Belgium. Some bioethicists and legal scholars, including Vaughn, think wed all stand to benefit from clear legal definitions. Risa Cromer, a cultural anthropologist at Purdue University in Indiana, who has spent years researching the field, is less convinced. Embryos exist in a murky, in-between state, she argues. You can (usually) discard them, or transfer them, but you cant sell them. You can make claims against damages to them, but an embryo is never viewed in the same way as a car, for example. It doesnt fit really neatly into that property category, says Cromer. But, very clearly, it doesnt fit neatly into the personhood category either. And there are benefits to keeping the definition vague, she adds: There is, I think, a human need for there to be a wide range of interpretive space for what IVF embryos are or could be. Thats because we dont have a fixed moral definition of what an embryo is. Embryos hold special value even for people who dont view them as children. They hold potential as human life. They can come to represent a fertility journeyone that might have been expensive, exhausting, and traumatizing. Even for people who feel like theyre just cells, it still cost a lot of time, money, [and effort] to get those [cells], says Cattapan. I think its an illusion that we might all agree on what the moral status of an embryo is, Mertes says. In the meantime, a growing number of embryologists, ethicists, and researchers are working to persuade fertility clinics and their patients not to create or freeze so many embryos in the first place. Early signs arent promising, says Baccino. The patients she has encountered arent particularly receptive to the idea. They think, If I will pay this amount for a cycle, I want to optimize my chances, so in my case, no, she says. She expects the number of embryos in storage to continue to grow. Holligans embryo has been in storage for almost five years. And she still doesnt know what to do with it. She tears up as she talks through her options. Would discarding the embryo feel like a miscarriage? Would it be a sad thing? If she donated the embryo, would she spend the rest of her life wondering what had become of her biological child, and whether it was having a good life? Should she hold on to the embryo for another decade in case her own daughter needs to use it at some point? The question [of what to do with the embryo] does pop into my head, but I quickly try to move past it and just say Oh, thats something Ill deal with at a later time, says Holligan. Im sure [my husband] does the same. The accumulation of frozen embryos is going to continue this way for some time until we come up with something that fully addresses everyones concerns, says Vaughn. But will we ever be able to do that? Im an optimist, so Im gonna say yes, he says with a hopeful smile. But I dont know at the moment.
    0 Comments ·0 Shares ·77 Views
  • Anthropics chief scientist on 5 ways agents will be even better in 2025
    www.technologyreview.com
    Agents are the hottest thing in tech right now. Top firms from Google DeepMind to OpenAI to Anthropic are racing to augment large language models with the ability to carry out tasks by themselves. Known as agentic AI in industry jargon, such systems have fast become the new target of Silicon Valley buzz. Everyone from Nvidia to Salesforce is talking about how they are going to upend the industry. We believe that, in 2025, we may see the first AI agents join the workforce and materially change the output of companies, Sam Altman claimed in a blog post last week. In the broadest sense, an agent is a software system that goes off and does something, often with minimal to zero supervision. The more complex that thing is, the smarter the agent needs to be. For many, large language models are now smart enough to power agents that can do a whole range of useful tasks for us, such as filling out forms, looking up a recipe and adding the ingredients to an online grocery basket, or using a search engine to do last-minute research before a meeting and producing a quick bullet-point summary. In October, Anthropic showed off one of the most advanced agents yet: an extension of its Claude large language model called computer use. As the name suggests, it lets you direct Claude to use a computer much as a person would, by moving a cursor, clicking buttons, and typing text. Instead of simply having a conversation with Claude, you can now ask it to carry out on-screen tasks for you. Anthropic notes that the feature is still cumbersome and error-prone. But it is alreadyavailable to a handful of testers, including third-party developers at companies such as DoorDash, Canva, and Asana. Computer use is a glimpse of whats to come for agents. To learn whats coming next, MIT Technology Review talked to Anthropics cofounder and chief scientist Jared Kaplan. Here are five ways that agents are going to get even better in 2025. (Kaplans answers have been lightly edited for length and clarity.) 1/ Agents will get better at using tools I think there are two axes for thinking about what AI is capable of. One is a question of how complex the task is that a system can do. And as AI systems get smarter, theyre getting better in that direction. But another direction thats very relevant is what kinds of environments or tools the AI can use. So, like, if you go back almost 10 years now to [DeepMinds Go-playing model] AlphaGo, we had AI systems that were superhuman in terms of how well they could play board games. But if all you can work with is a board game, then thats a very restrictive environment. Its not actually useful, even if its very smart. With text models, and then multimodal models, and now computer useand perhaps in the future with roboticsyoure moving toward bringing AI into different situations and tasks, and making it useful. We were excited about computer use basically for that reason. Until recently, with large language models, its been necessary to give them a very specific prompt, give them very specific tools, and then theyre restricted to a specific kind of environment. What I see is that computer use will probably improve quickly in terms of how well models can do different tasks and more complex tasks. And also to realize when theyve made mistakes, or realize when theres a high-stakes question and it needs to ask the user for feedback. 2/ Agents will understand context Claude needs to learn enough about your particular situation and the constraints that you operate under to be useful. Things like what particular role youre in, what styles of writing or what needs you and your organization have. ANTHROPIC I think that well see improvements there where Claude will be able to search through things like your documents, your Slack, etc., and really learn whats useful for you. Thats underemphasized a bit with agents. Its necessary for systems to be not only useful but also safe, doing what you expected. Another thing is that a lot of tasks wont require Claude to do much reasoning. You dont need to sit and think for hours before opening Google Docs or something. And so I think that a lot of what well see is not just more reasoning but the application of reasoning when its really useful and important, but also not wasting time when its not necessary. 3/ Agents will make coding assistants better We wanted to get a very initial beta of computer use out to developers to get feedback while the system was relatively primitive. But as these systems get better, they might be more widely used and really collaborate with you on different activities. I think DoorDash, the Browser Company, and Canva are all experimenting with, like, different kinds of browser interactions and designing them with the help of AI. My expectation is that well also see further improvements to coding assistants. Thats something thats been very exciting for developers. Theres just a ton of interest in using Claude 3.5 for coding, where its not just autocomplete like it was a couple of years ago. Its really understanding whats wrong with code, debugging itrunning the code, seeing what happens, and fixing it. 4/ Agents will need to be made safe We founded Anthropic because we expected AI to progress very quickly and [thought] that, inevitably, safety concerns were going to be relevant. And I think thats just going to become more and more visceral this year, because I think these agents are going to become more and more integrated into the work we do. We need to be ready for the challenges, like prompt injection. [Prompt injection is an attack in which a malicious prompt is passed to a large language model in ways that its developers did not foresee or intend. One way to do this is to add the prompt to websites that models might visit.] Prompt injection is probably one of the No.1 things were thinking about in terms of, like, broader usage of agents. I think its especially important for computer use, and its something were working on very actively, because if computer use is deployed at large scale, then there could be, like, pernicious websites or something that try to convince Claude to do something that it shouldnt do. And with more advanced models, theres just more risk. We have a robust scaling policy where, as AI systems become sufficiently capable, we feel like we need to be able to really prevent them from being misused. For example, if they could help terroriststhat kind of thing. So Im really excited about how AI will be usefulits actually also accelerating us a lot internally at Anthropic, with people using Claude in all kinds of ways, especially with coding. But, yeah, therell be a lot of challenges as well. Itll be an interesting year.
    0 Comments ·0 Shares ·110 Views
  • The Download: escalating pandemic risks, and ask us anything on Reddit
    www.technologyreview.com
    Ive combed the internet to find you todays most fun/important/scary/fascinating stories about technology. 1 Metas new speech policies allow the denigration of trans people The revamped guidelines now permit previously-forbidden insults and allegations. (Platformer $)+ The changes have left Metas employees feeling embarrassed and ashamed. (404 Media)+ Axed fact-checkers held an emergency meeting after Meta said it no longer required their services. (Insider $)2 The US Supreme Court will hear TikToks final plea Justices are likely to make their decision before the end of next week. (The Guardian)+ If the ban is enacted, you can probably still access TikTok via a VPN. (NYT $)+ ByteDances founder could be TikToks secret weapon. (The Information $)3 Those pictures of the Hollywood sign burning are AI-generatedAI slop is making the Los Angeles fires appear even worse than they are. (404 Media)+ Elon Musk and Donald Trump arent helping matters by spreading disinformation. (The Verge)+ AI cameras are keeping tabs on the spreading destruction in Californias hills. (Insider $)+ The scale of the destruction is truly horrifying. (NY Mag $)4 Last year was officially the hottest ever recorded The average global temperature exceeded 1.5C above the pre-industrial baseline for the first time. (New Scientist $)+ Consequently, were edging closer to breaching the Paris Agreement. (Politico)5 How to prevent another zoonotic pandemic It all hinges on early detection. (FT $)6 Foxconn has stopped sending Chinese workers to Indian iPhone factoriesIts bad news for Apple, as its likely to disrupt production. (Rest of World) 7 This new cell could change plastic surgery as we know it Lipochondrocytes have the rigidity of cartilage and the squishiness of fat. (Wired $)+ Cosmetic surgery is booming in middle-income countries. (Economist $)8 Yandexs co-founder is shaking off Putin Arkady Volozh has condemned Russias actions in the war with Ukraine, and started a new company. (Bloomberg $)+ How Russia killed its tech industry. (MIT Technology Review)
    0 Comments ·0 Shares ·70 Views
  • How the US is preparing for a potential bird flu pandemic
    www.technologyreview.com
    This article first appeared in The Checkup,MIT Technology Reviewsweekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first,sign up here. This week marks a strange anniversaryits five years since most of us first heard about a virus causing a mysterious pneumonia. A virus that we later learned could cause a disease called covid-19. A virus that swept the globe and has since been reported to have been responsible for over 7 million deathsand counting. I first covered the virus in an article published on January 7, 2020, which had the headline Doctors scramble to identify mysterious illness emerging in China. For that article, and many others that followed it, I spoke to people who were experts on viruses, infectious disease, and epidemiology. Frequently, their answers to my questions about the virus, how it might spread, and the risks of a pandemic were the same: We dont know. We are facing the same uncertainty now with H5N1, the virus commonly known as bird flu. This virus has been decimating bird populations for years, and now a variant is rapidly spreading among dairy cattle in the US. We know it can cause severe disease in animals, and we know it can pass from animals to people who are in close contact with them. As of this Monday this week, we also know that it can cause severe disease in peoplea 65-year-old man in Louisiana became the first person in the US to die from an H5N1 infection. Scientists are increasingly concerned about a potential bird flu pandemic. The question is, given all the enduring uncertainty around the virus, what should we be doing now to prepare for the possibility? Can stockpiled vaccines save us? And, importantly, have we learned any lessons from a covid pandemic that still hasnt entirely fizzled out? Part of the challenge here is that it is impossible to predict how H5N1 will evolve. A variant of the virus caused disease in people in 1997, when there was a small but deadly outbreak in Hong Kong. Eighteen people had confirmed diagnoses, and six of them died. Since then, there have been sporadic cases around the worldbut no large outbreaks. As far as H5N1 is concerned, weve been relatively lucky, says Ali Khan, dean of the college of public health at the University of Nebraska. Influenza presents the greatest infectious-disease pandemic threat to humans, period, says Khan. The 1918 flu pandemic was caused by a type of influenza virus called H1N1 that appears to have jumped from birds to people. It is thought to have infected a third of the worlds population, and to have been responsible for around 50 million deaths. Another H1N1 virus was responsible for the 2009 swine flu pandemic. That virus hit younger people hardest, as they were less likely to have been exposed to similar variants and thus had much less immunity. It was responsible for somewhere between 151,700 and 575,400 deaths that year. To cause a pandemic, the H5N1 variants currently circulating in birds and dairy cattle in the US would need to undergo genetic changes that allow them to spread more easily from animals to people, spread more easily between people, and become more deadly in people. Unfortunately, we know from experience that viruses need only a few such changes to become more easily transmissible. And with each and every infection, the risk that a virus will acquire these dangerous genetic changes increases. Once a virus infects a host, it can evolve and swap chunks of genetic code with any other viruses that might also be infecting that host, whether its a bird, a pig, a cow, or a person. Its a big gambling game, says Marion Koopmans, a virologist at the Erasmus University Medical Center in Rotterdam, the Netherlands. And the gambling is going on at too large a scale for comfort. There are ways to improve our odds. For the best chance at preventing another pandemic, we need to get a handle on, and limit, the spread of the virus. Here, the US could have done a better job at limiting the spread in dairy cows, says Khan. It should have been found a lot earlier, he says. There should have been more aggressive measures to prevent transmission, to recognize what disease looks like within our communities, and to protect workers. States could also have done better at testing farm workers for infection, says Koopmans. Im surprised that I havent heard of an effort to eradicate it from cattle, she adds. A country like the US should be able to do that. The good news is that there are already systems in place for tracking the general spread of flu in people. The World Health Organizations Global Influenza Surveillance and Response System collects and analyzes samples of viruses collected from countries around the world. It allows the organization to make recommendations about seasonal flu vaccines and also helps scientists track the spread of various flu variants. Thats something we didnt have for the covid-19 virus when it first took off. We are also better placed to make vaccines. Some countries, including the US, are already stockpiling vaccines that should be at least somewhat effective against H5N1 (although it is difficult to predict exactly how effective they will be against some future variant). The US Administration for Strategic Preparedness and Response plans to have up to 10 million doses of prefilled syringes and multidose vials prepared by the end of March, according to an email from a representative. The US Department of Health and Human Services has also said it will provide the pharmaceutical company Moderna with $176 million to create mRNA vaccines for pandemic influenzausing the same quick-turnaround vaccine production technology used in the companys covid-19 vaccines. Some question whether these vaccines should have already been offered to dairy farm workers in affected parts of the US. Many of these individuals have been exposed to the virus, a good chunk of them appear to have been infected with it, and some of them have become ill. If the decision had been up to Khan, he says, they would have been offered the H5N1 vaccine by now. And we should ensure they are offered seasonal flu vaccines in order to limit the risk that the two flu viruses will mingle inside one person, he adds. Others worry that 10 million vaccine doses arent enough for a country with a population of around 341 million. But health agencies walk a razor-thin line between having too much vaccine for something and not having enough, says Khan. If an outbreak never transpires, 340 million doses of vaccine will feel like an enormous waste of resources. We cant predict how well these viruses will work, either. Flu viruses mutate all the time, and even seasonal flu vaccines are notoriously unpredictable in their efficacy. I think weve become a little bit spoiled with the covid vaccines, says Koopmans. We were really, really lucky [to develop] vaccines with high efficacy. One vaccine lesson we should have learned from the covid-19 pandemic is the importance of equitable access to vaccines around the world. Unfortunately, its unlikely that we have. It is doubtful that low-income countries will have early access to [a pandemic influenza] vaccine unless the world takes action, Nicole Lurie of the Coalition for Epidemic Preparedness Innovations (CEPI) said in a recent interview for Gavi, a public-private alliance for vaccine equity. And another is the impact of vaccine hesitancy. Making vaccines might not be a problembut convincing people to take them might be, says Khan. We have an incoming administration that has lots of vaccine hesitancy, he points out. So while we may end up having vaccines available, its not very clear to me if we have the political and social will to actually implement good public health measures. This is another outcome that is impossible to predict, and I wont attempt to do so. But I am hoping that the relevant administrations will step up our defenses. And that this will be enough to prevent another devastating pandemic. Now read the rest of The Checkup Read more from MIT Technology Review's archive Bird flu has been circulating in US dairy cows for months. Virologists are worried it could stick around on US farms forever. As the virus continues to spread, the risk of a pandemic continues to rise. We still dont really know how the virus is spreading, but we do know that it is turning up in raw milk. (Please dont drink raw milk.) mRNA vaccines helped us through the covid-19 pandemic. Now scientists are working on mRNA flu vaccinesincluding universal vaccines that could protect against multiple flu viruses. The next generation of mRNA vaccines is on the way. These vaccines are self-amplifying and essentially tell the body how to make more mRNA. Maybe theres an alternative to dairy farms of the type that are seeing H5N1 in their cattle. Scientists are engineering yeasts and plants with bovine genes so they can produce proteins normally found in milk, which can be used to make spreadable cheeses and ice cream. The cofounder of one company says a factory of bubbling yeast vats could replace 50,000 to 100,000 cows. From around the web Bird flu has been circulating in US dairy cows for months. Virologists are worried it could stick around on US farms forever. As the virus continues to spread, the risk of a pandemic continues to rise. We still dont really know how the virus is spreading, but we do know that it is turning up in raw milk. (Please dont drink raw milk.) mRNA vaccines helped us through the covid-19 pandemic. Now scientists are working on mRNA flu vaccinesincluding universal vaccines that could protect against multiple flu viruses. The next generation of mRNA vaccines is on the way. These vaccines are self-amplifying and essentially tell the body how to make more mRNA. Maybe theres an alternative to dairy farms of the type that are seeing H5N1 in their cattle. Scientists are engineering yeasts and plants with bovine genes so they can produce proteins normally found in milk, which can be used to make spreadable cheeses and ice cream. The cofounder of one company says a factory of bubbling yeast vats could replace 50,000 to 100,000 cows.
    0 Comments ·0 Shares ·121 Views
  • The Download: greener steel, and what 2025 holds for climate tech
    www.technologyreview.com
    This is today's edition ofThe Download,our weekday newsletter that provides a daily dose of what's going on in the world of technology. The worlds first industrial-scale plant for green steel promises a cleaner future As of 2023, nearly 2 billion metric tons of steel were being produced annually, enough to cover Manhattan in a layer more than 13 feet thick. Making this metal produces a huge amount of carbon dioxide. Overall, steelmaking accounts for around 8% of the worlds carbon emissionsone of the largest industrial emitters and far more than such sources as aviation.Read the full story.Douglas Main Green steel is one of our 10 Breakthrough Technologies for 2025, MIT Technology Reviews annual list of tech to watch. Check out the rest of the list, and cast your vote for the honorary 11th breakthrough. 2025 is a critical year for climate tech Casey Crownhart I love the fresh start that comes with a new year. And one thing adding a boost to my January is our newest list of 10 Breakthrough Technologies. As I was looking over the finished list this week, I was struck by something: While there are some entries from other fields that are three or even five years away, all the climate items are either newly commercially available or just about to be. Its certainly apt, because this year in particular seems to be bringing a new urgency to the fight against climate change. Its time for these technologies to grow up and get out there. Read the full story.This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday. A New York legislator wants to pick up the pieces of the dead California AI bill The first Democrat in New York history with a computer science background wants to revive some of the ideas behind the failed California AI safety bill, SB 1047, with a new version in his state that would regulate the most advanced AI models. Assembly member Alex Bores hopes his bill, currently an unpublished draft that MIT Technology Review has seen, will address many of the concerns that blocked SB 1047 from passing into law last year. Read the full story. Scott J Mulligan MIT Technology Review Narrated: How covid conspiracy theories led to an alarming resurgence in AIDS denialism Podcaster Joe Rogan, former presidential candidate Robert F. Kennedy Jr, and football quarterback Aaron Rodgers are all helping revive AIDS denialisma false collection of theories arguing either that HIV doesnt cause AIDS or that theres no such thing as HIV at all. These ideas were initially promoted back in the 1980s and 90s but fell out of favor, as more and more evidence stacked up against them, and as more people with HIV and AIDS started living longer lives thanks to effective new treatments. But then coronavirus arrived. This is our latest story to be turned into a MIT Technology Review Narrated podcast, whichwere publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as its released.Ask our journalists anything! Do you have questions about emerging technologies? Well, weve got answers. MIT Technology Reviews science and tech journalists are hosting an AMA on Reddit tomorrow at 12 pm ET. Submit your questions now! The must-reads Ive combed the internet to find you todays most fun/important/scary/fascinating stories about technology. 1 Wildfires are sweeping through Los Angeles Unusually strong winds and dry weather are accelerating multiple fires around the city. (Vox)+ While California is no stranger to wildfires, these are particularly awful. (The Atlantic $)+ Five people are known to have died, and thousands have lost their homes.(NY Mag $)+ The quest to build wildfire-resistant homes. (MIT Technology Review) 2 AI can now predict how the genes inside a cell will drive its behavior Scientists are hopeful it could usher in cell-specific therapies to fight genetic diseases. (WP $)+ How AI can help us understand how cells workand help cure diseases. (MIT Technology Review)3 The Biden administration is planning a further chips crackdown One of its final acts will be a push to prevent sales of chips to China and Russia. (Bloomberg $)+ A group of tech representatives is begging the US government to reconsider. (Reuters)4 Elon Musks DOGE division wants to slash $2 trillion in federal spending But even he admits its a ridiculously ambitious goal. (WSJ $)+ He reckons he might be able to cut half that amount. (NBC News)5 Meta exempted its top advertisers from content moderation processesIt agreed to suppress standard testing for high spenders. (FT $) + Mark Zuckerberg appears to be following Xs playbook. (Wired $)+ Maybe the two platforms arent so different after all. (The Atlantic $)6 How one teenager embarked on a nationwide swatting spreeAlan Filions false shooting calls sent police into hundreds of schools across the US. (Wired $) 7 Blue Origin is limbering up to launch its new Glenn rocketIn the companys very first flight. (New Scientist $) + If successful, the flight could prove Blue Origins worthiness as a SpaceX rival. (The Register)8 Grok could be getting an unhinged modeWhatever that means. (TechCrunch) + Xs chatbot was one of the biggest AI flops of 2024. (MIT Technology Review)9 The secret to scaling quantum computing? Fiber optic cables Mixing quantum data with regular ole internet gigabits is one solution. (IEEE Spectrum) 10 This robot vacuum has limbs All the better to clean your home with. (The Verge)+ A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook? (MIT Technology Review)Quote of the day I voted for TrumpI didnt vote for Elon. Preston Parra, chairman of the pro-Trump Conservative PAC, expresses his frustration with Elon Musks escalating involvement in US politics to the New York Times. The big story The weeds are winning October 2024 Since the 1980s, more and more plants have evolved to become immune to herbicides. This threatens to decrease yields, and in extreme cases can wipe out whole fields. At worst, it can even drive farmers out of business. Its the agricultural equivalent of antibiotic resistance, and it keeps getting worse. Agriculture needs to embrace a diversity of weed control practices. But thats much easier said than done. Read the full story. Douglas Main We can still have nice things A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.) + Andrew McCarthy has taken more than 90,000 pictures of the sun, which is pretty amazing.+ Sciences most famous dogs? Yes please.+ What better time to reorganize your kitchen cupboards than at the start of the new year?+ The Robbie Williams biopic Better Man is completely bonkersand a whole lot of fun.
    0 Comments ·0 Shares ·119 Views
  • A New York legislator wants to pick up the pieces of the dead California AI bill
    www.technologyreview.com
    The first Democrat in New York history with a computer science background wants to revive some of the ideas behind the failed California AI safety bill, SB 1047, with a new version in his state that would regulate the most advanced AI models. Its called the RAISE Act, an acronym for Responsible AI Safety and Education. Assembly member Alex Bores hopes his bill, currently an unpublished draftsubject to changethat MIT Technology Review has seen, will address many of the concerns that blocked SB 1047 from passing into law. SB 1047 was, at first, thought to be a fairly modest bill that would pass without much fanfare. In fact, it flew through the California statehouse with huge margins and received significant public support. However, before it even landed on Governor Gavin Newsoms desk for signature in September, it sparked an intense national fight. Google, Meta, and OpenAI came out against the bill, alongside top congressional Democrats like Nancy Pelosi and Zoe Lofgren. Even Hollywood celebrities got involved, with Jane Fonda and Mark Hamill expressing support for the bill. Ultimately, Newsom vetoed SB 1047, effectively killing regulation of so-called frontier AI models not just in California but, with the lack of laws on the national level, anywhere in the US, where the most powerful systems are developed. Now Bores hopes to revive the battle. The main provisions in the RAISE Act include requiring AI companies to develop safety plans for the development and deployment of their models. The bill also provides protections for whistleblowers at AI companies. It forbids retaliation against an employee who shares information about an AI model in the belief that it may cause critical harm; such whistleblowers can report the information to the New York attorney general. One way the bill defines critical harm is the use of an AI model to create a chemical, biological, radiological, or nuclear weapon that results in the death or serious injury of 100 or more people. Alternatively, a critical harm could be a use of the AI model that results in 100 or more deaths or at least $1 billion in damages in an act with limited human oversight that if committed by a human would constitute a crime requiring intent, recklessness, or gross negligence. The safety plans would ensure that a company has cybersecurity protections in place to prevent unauthorized access to a model. The plan would also require testing of models to assess risks before and after training, as well as detailed descriptions of procedures to assess the risks associated with post-training modifications. For example, some current AI systems have safeguards that can be easily and cheaply removed by a malicious actor. A safety plan would have to address how the company plans to mitigate these actions. The safety plans would then be audited by a third party, like a nonprofit with technical expertise that currently tests AI models. And if violations are found, the bill empowers the attorney general of New York to issue fines and, if necessary, go to the courts to determine whether to halt unsafe development. A different flavour of bill The safety plans and external audits were elements of SB 1047, but Bores aims to differentiate his bill from the California one. We focused a lot on what the feedback was for 1047, he says. Parts of the criticism were in good faith and could make improvements. And so we've made a lot of changes. The RAISE Act diverges from SB 1047 in a few ways. For one, SB 1047 would have created the Board of Frontier Models, tasked with approving updates to the definitions and regulations around these AI models, but the proposed act would not create a new government body. The New York bill also doesnt create a public cloud computing cluster, which SB 1047 would have done. The cluster was intended to support projects to develop AI for the public good. The RAISE Act doesnt have SB 1047s requirement that companies be able to halt all operations of their model, a capability sometimes referred to as a kill switch. Some critics alleged that the shutdown provision of SB 1047 would harm open-source models, since developers cant shut down a model someone else may now possess (even though SB 1047 had an exemption for open-source models). The RAISE Act avoids the fight entirely. SB 1047 referred to an advanced persistent threat associated with bad actors trying to steal information during model training. The RAISE Act does away with that definition, sticking to addressing critical harms from covered models. Focusing on the wrong issues? Bores bill is very specific with its definitions in an effort to clearly delineate what this bill is and isnt about. The RAISE Act doesnt address some of the current risks from AI models, like bias, discrimination, and job displacement. Like SB 1047, it is very focused on catastrophic risks from frontier AI models. Some in the AI community believe this focus is misguided. Were broadly supportive of any efforts to hold large models accountable, says Kate Brennan, associate director of the AI Now Institute, which conducts AI policy research. But defining critical harms only in terms of the most catastrophic harms from the most advanced models overlooks the material risks that AI poses, whether its workers subject to surveillance mechanisms, prone to workplace injuries because of algorithmically managed speed rates, climate impacts of large-scale AI systems, data centers exerting massive pressure on local power grids, or data center construction sidestepping key environmental protections," she says. Bores has worked on other bills addressing current harms posed by AI systems, like discrimination and lack of transparency. That said, Bores is clear that this new bill is aimed at mitigating catastrophic risks from more advanced models. Were not talking about any model that exists right now, he says. We are talking about truly frontier models, those on the edge of what we can build and what we understand, and there is risk in that. The bill would cover only models that pass a certain threshold for how many computations their training required, typically measured in FLOPs (floating-point operations). In the bill, a covered model is one that requires more than 1026 FLOPs in its training and costs over $100 million. For reference, GPT-4 is estimated to have required 1025 FLOPs. This approach may draw scrutiny from industry forces. While we cant comment specifically on legislation that isnt public yet, we believe effective regulation should focus on specific applications rather than broad model categories, says a spokesperson at Hugging Face, a company that opposed SB 1047. Early days The bill is in its nascent stages, so its subject to many edits in the future, and no opposition has yet formed. There may already be lessons to be learned from the battle over SB 1047, however. Theres significant disagreement in the space, but I think debate around future legislation would benefit from more clarity around the severity, the likelihood, and the imminence of harms, says Scott Kohler, a scholar at the Carnegie Endowment for International Peace, who tracked the development of SB 1047. When asked about the idea of mandated safety plans for AI companies, assembly member Edward Ra, a Republican who hasn't yet seen a draft of the new bill yet, said: I dont have any general problem with the idea of doing that. We expect businesses to be good corporate citizens, but sometimes you do have to put some of that into writing. Ra and Bores co chair the New York Future Caucus, which aims to bring together lawmakers 45 and under to tackle pressing issues that affect future generations. Scott Wiener, a California state senator who sponsored SB 1047, is happy to see that his initial bill, even though it failed, is inspiring further legislation and discourse. The bill triggered a conversation about whether we should just trust the AI labs to make good decisions, which some will, but we know from past experience, some wont make good decisions, and thats why a level of basic regulation for incredibly powerful technology is important, he says. He has his own plans to reignite the fight: Were not done in California. There will be continued work in California, including for next year. Im optimistic that California is gonna be able to get some good things done. And some believe the RAISE Act will highlight a notable contradiction: Many of the industrys players insist that they want regulation, but when any regulation is proposed, they fight against it. SB 1047 became a referendum on whether AI should be regulated at all, says Brennan. There are a lot of things we saw with 1047 that we can expect to see replay in New York if this bill is introduced. We should be prepared to see a massive lobbying reaction that industry is going to bring to even the lightest-touch regulation. Wiener and Bores both wish to see regulation at a national level, but in the absence of such legislation, theyve taken the battle upon themselves. At first it may seem odd for states to take up such important reforms, but California houses the headquarters of the top AI companies, and New York, which has the third-largest state economy in the US, is home to offices for OpenAI and other AI companies. The two states may be well positioned to lead the conversation around regulation. There is uncertainty at the direction of federal policy with the transition upcoming and around the role of Congress, says Kohler. It is likely that states will continue to step up in this area. Wieners advice for New York legislators entering the arena of AI regulation? Buckle up and get ready.
    0 Comments ·0 Shares ·114 Views
  • 2025 is a critical year for climate tech
    www.technologyreview.com
    This article is from The Spark, MIT Technology Reviews weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here. I love the fresh start that comes with a new year. And one thing adding a boost to my January is our newest list of 10 Breakthrough Technologies. In case you havent browsed this years list or a previous version, it features tech thats either breaking into prominence or changing society. We typically recognize a range of items running from early-stage research to consumer technologies that folks are getting their hands on now. As I was looking over the finished list this week, I was struck by something: While there are some entries from other fields that are three or even five years away, all the climate items are either newly commercially available or just about to be. Its certainly apt, because this year in particular seems to be bringing a new urgency to the fight against climate change. Were facing global political shifts and entering the second half of the decade. Its time for these climate technologies to grow up and get out there. Green steel Steel is a crucial material for buildings and vehicles, and making it accounts for around 8% of global greenhouse-gas emissions. New manufacturing methods could be a huge part of cleaning up heavy industry, and theyre just on the cusp of breaking into the commercial market. One company, called Stegra, is close to starting up the worlds first commercial green steel plant, which will make the metal using hydrogen from renewable sources. (You might know this company by its former name, H2 Green Steel, as we included it on our 2023 list of Climate Tech Companies to Watch.) When I first started following Stegra a few years ago, its plans for a massive green steel plant felt incredibly far away. Now the company says its on track to produce steel at the factory by next year. The biggest challenge in this space is money. Building new steel plants is expensiveStegra has raised almost $7 billion. And the companys product will be more expensive than conventional material, so itll need to find customers willing to pay up (so far, it has). There are other efforts to clean up steel that will all face similar challenges around money, including another play in Sweden called Hybrit and startups like Boston Metal and Electra, which use different processes. Read more about green steel, and the potential obstacles it faces as we enter a new phase of commercialization, in this short blurb and in this longer feature about Stegra. Cow burp remedies Humans love burgers and steaks and milk and cheese, so we raise a whole bunch of cows. The problem is, these animals are among a group with a funky digestion process that produces a whole lot of methane (a powerful greenhouse gas). A growing number of companies are trying to develop remedies that help cut down on their methane emissions. This is one of my favorite items on the list this year (and definitely my favorite illustrationat the very least, check out this blurb to enjoy the art). Theres already a commercially available option right now: a feed additive called Bovaer from DSM-Firmenich that the company says can cut methane emissions by 30% in dairy cattle, and more in beef cattle. Startups are right behind with their own products, some of which could prove even better. A key challenge all these companies face moving forward is acceptance: from regulatory agencies, farmers, and consumers. Some companies still need to go through lengthy and often expensive tests to show that their products are safe and effective. Theyll also need to persuade farmers to get on board. Some might also face misinformation thats causing some consumers to protest these new additives. Cleaner jet fuel While planes crisscrossing the world are largely powered by fossil fuels, some alternatives are starting to make their appearance in aircraft. New fuels, today mostly made from waste products like used cooking oil, can cut down emissions from air travel. In 2024, they made up about 0.5% of the fuel supply. But new policies could help these fuels break into new prominence, and new options are helping to widen their supply. The key challenge here is scale. Global demand for jet fuel was about 100 billion gallons last year, so well need a whole lot of volume from new producers to make a dent in aviations emissions. To illustrate the scope, take LanzaJets new plant, opened in 2024. Its the first commercial-scale facility that can make jet fuel with ethanol, and it has a capacity of about 9 million gallons annually. So we would need about 10,000 of those plants to meet global demanda somewhat intimidating prospect. Read more in my write-up here. From cow burps to jet fuel to green steel, theres a huge range of tech thats entering a new stage of deployment and will need to face new challenges in the next few years. Well be watching it allthanks for coming along. Now read the rest of The Spark Related reading Check out our full list of 2025s Breakthrough Technologies here. Theres also a poll where you can vote for what you think the 11th item should be. Im not trying to influence anyones vote, but I think methane-detecting satellites are pretty interestingjust saying This package is part of our January/February print issue, which also includes stories on: This system thats tracking early warning signs of infection in wheat crops How wind could be a low-tech solution to help clean up shipping Efforts to use human waste in agriculture JUSTIN SULLIVAN/GETTY Another thing EVs are (mostly) set for solid growth in 2025, as my colleague James Temple covers in his newest story. Check it out for more about whats next for electric vehicles, including what we might expect from a new administration in the US and how China is blowing everyone else out of the water. Keeping up with climate Winter used to be the one time of year that California didnt have to worry about wildfires. A rapidly spreading fire in the southern part of the state is showing thats not the case anymore. (Bloomberg) Teslas annual sales decline for the first time in over a decade. Deliveries were lower than expected for the final quarter of the year. (Associated Press) Meanwhile, in China, EVs are set to overtake traditional cars in sales years ahead of schedule. Forecasts suggest that EVs could account for 50% of car sales this year. (Financial Times) KoBold metals raised $537 million in funding to use AI to mine copper. The funding pushes the startups valuation to $2.96 billion. (TechCrunch) Read this profile of the company from 2021 for more. (MIT Technology Review)We finally have the final rules for a tax credit designed to boost hydrogen in the US. The details matter here. (Heatmap) China just approved the worlds most expensive infrastructure project. The hydroelectric dam could produce enough power for 300 million people, triple the capacity of the current biggest dam. (Economist) In 1979, President Jimmy Carter installed 32 solar panels on the White Houses roof. Although they came down just a few years later, the panels lived multiple lives afterward. I really enjoyed reading about this small piece of Carters legacy in the wake of his passing. (New York Times) An open pit mine in California is the only one in the US mining and extracting rare earth metals including neodymium and praseodymium. This is a fascinating look at the site. (IEEE Spectrum) I wrote about efforts to recycle rare earth metals, and what it means for the long-term future of metal supply, in a feature story last year. (MIT Technology Review)
    0 Comments ·0 Shares ·113 Views
  • The Download: whats next for AI, and stem-cell therapies
    www.technologyreview.com
    This is today's edition ofThe Download,our weekday newsletter that provides a daily dose of what's going on in the world of technology. Whats next for AI in 2025 For the last couple of years weve had a go at predicting whats coming next in AI. A fools game given how fast this industry moves. But were on a roll, and were doing it again. How did we score last time round? Our four hot trends to watch out for in 2024 pretty much nailed it by including what we called customized chatbots (we didnt know it yet, but we were talking about what everyone now calls agents, the hottest thing in AI right now), generative video, and more general-purpose robots that can do a wider range of tasks.So whats coming in 2025? Here are five picks from our AI team. James O'Donnell, Will Douglas Heaven & Melissa Heikkil This piece is part of MIT Technology Reviews Whats Next series, looking across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here. Stem-cell therapies that work: 10 Breakthrough Technologies 2025 A quarter-century ago, researchers isolated powerful stem cells from embryos created through in vitro fertilization. These cells, theoretically able to morph into any tissue in the human body, promised a medical revolution. Think: replacement parts for whatever ails you. But stem-cell science didnt go smoothly. Even though scientists soon learned to create these make-anything cells without embryos, coaxing them to become truly functional adult tissue proved harder than anyone guessed. Now, though, stem cells are finally on the brink of delivering. Read the full story.Stem-cell therapies is one of our 10 Breakthrough Technologies for 2025, MIT Technology Reviews annual list of tech to watch. Check out the rest of the list, and cast your vote for the honorary 11th breakthroughyou have until 1 April! The must-reads Ive combed the internet to find you todays most fun/important/scary/fascinating stories about technology. 1 Meta will no longer employ fact-checkers Instead, it will outsource fact verification to its users. (NYT $)+ What could possibly go wrong!? (WSJ $)+ The third party groups it employed say they were blindsided by the decision. (Wired $)2 American workers are increasingly worried about robots The wave of automation threatening their jobs is only growing stronger. (FT $)+ Will we ever trust robots? (MIT Technology Review)3 NASA isnt sure how to bring Martian rocks and soil to Earth Its enormously expensive, and we cant guarantee itll contain the first evidence of extraterrestrial life we hope it does. (WP $)+ NASA is letting Trump decide how to do it(NYT $)4 Meta has abandoned its Quest Pro headset What does this tell us about the state of consumer VR? Nothing good. (Fast Company $)+ Turns out people dont want to spend $1,000 on a headset. (Forbes $)5 The man who blew up a Cybertruck used ChatGPT to plan the attack He asked the chatbot how much explosive was needed to trigger the blast. (Reuters) 6 Hackers claim to have stolen a huge amount of location dataIts a nightmare scenario for privacy advocates. (404 Media) 7 A bitcoin investor has been ordered to disclose secret codesFrank Richard Ahlgren III has been sentenced for tax fraud, and owes the US government more than $1 million. (Bloomberg $) 8 The world is far more interconnected than we realizedNetworks of bacteria in the ocean are shedding new light on old connections. (Quanta Magazine) 9 The social web isnt made for everyone Its constant updates are a nightmare for people with cognitive decline. (The Atlantic $)+ How to fix the internet. (MIT Technology Review)10 Is Elon Musk really one of the worlds top Diablo players? His ranking suggests he plays all day, every day. (WSJ $)Quote of the day We have completely lost the plot. A Meta employee laments the companys decision to hire new board member Dana White, 404 Media reports. The big story How generative AI could reinvent what it means to play June 2024 To make them feel alive, open-world games like Red Dead Redemption 2 are inhabited by vast crowds of computer-controlled characters. These animated peoplecalled NPCs, for nonplayer charactersmake these virtual worlds feel lived in and full. Oftenbut not alwaysyou can talk to them. After a while, however, the repetitive chitchat (or threats) of a passing stranger forces you to bump up against the truth: This is just a game. Its still fun, but the illusion starts to weaken when you poke at it. It may not always be like that. Just as it is upending other industries, generative AI is opening the door to entirely new kinds of in-game interactions that are open-ended, creative, and unexpected. The game may not always have to end. Read the full story. Niall Firth We can still have nice things A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.) + Why Feathers McGraw is cinemas most sinister villain, bar none. ($)+ Intrepid supper clubs sound terrible, but these other travel trends for 2025 are intriguing.+ Steve Young is a literal pinball wizard, restoring 70-year old machines for the future generations to enjoy.+ Its time to pay our respects to a legend: Perry, the donkey who inspired Shreks four-legged sidekick, is no more.
    0 Comments ·0 Shares ·119 Views
More Stories