Massachusetts Institute of Technology (MIT)
The Massachusetts Institute of Technology is a world leader in research and education.
Recent Updates
  • How a top Chinese AI model overcame US sanctions
    www.technologyreview.com
    The AI community is abuzz over DeepSeek R1, a new open-source reasoning model. The model was developed by the Chinese AI startup DeepSeek, which claims that R1 matches or even surpasses OpenAI's ChatGPT o1 on multiple key benchmarks but operates at a fraction of the cost. "This could be a truly equalizing breakthrough that is great for researchers and developers with limited resources, especially those from the Global South," says Hancheng Cao, an assistant professor in information systems at Emory University.

DeepSeek's success is even more remarkable given the constraints facing Chinese AI companies in the form of increasing US export controls on cutting-edge chips. But early evidence shows that these measures are not working as intended. Rather than weakening China's AI capabilities, the sanctions appear to be driving startups like DeepSeek to innovate in ways that prioritize efficiency, resource pooling, and collaboration. To create R1, DeepSeek had to rework its training process to reduce the strain on its GPUs, a variety released by Nvidia for the Chinese market that have their performance capped at half the speed of its top products, according to Zihan Wang, a former DeepSeek employee and current PhD student in computer science at Northwestern University.

DeepSeek R1 has been praised by researchers for its ability to tackle complex reasoning tasks, particularly in mathematics and coding. The model employs a "chain of thought" approach similar to that used by ChatGPT o1, which lets it solve problems by processing queries step by step. Dimitris Papailiopoulos, principal researcher at Microsoft's AI Frontiers research lab, says what surprised him the most about R1 is its engineering simplicity. "DeepSeek aimed for accurate answers rather than detailing every logical step, significantly reducing computing time while maintaining a high level of effectiveness," he says.
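The "chain of thought" prompting idea mentioned above can be illustrated with a toy sketch. The `build_prompt` helper below is purely hypothetical, not DeepSeek's or OpenAI's actual training or API setup; it only shows the difference between a direct prompt and one that asks the model to reason step by step.

```python
# Toy illustration of chain-of-thought prompting: the same question with
# and without an instruction to reason step by step. build_prompt is a
# hypothetical helper, not any vendor's real API.

def build_prompt(question, chain_of_thought=True):
    if chain_of_thought:
        return (f"Q: {question}\n"
                "Think through the problem step by step, "
                "then state the final answer.\nA:")
    return f"Q: {question}\nA:"

print(build_prompt("What is 17 * 24?", chain_of_thought=False))
print(build_prompt("What is 17 * 24?"))
```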
DeepSeek has also released six smaller versions of R1 that are small enough to run locally on laptops. It claims that one of them even outperforms OpenAI's o1-mini on certain benchmarks. "DeepSeek has largely replicated o1-mini and has open-sourced it," tweeted Perplexity CEO Aravind Srinivas. DeepSeek did not reply to MIT Technology Review's request for comment.

Despite the buzz around R1, DeepSeek remains relatively unknown. Based in Hangzhou, China, it was founded in July 2023 by Liang Wenfeng, an alumnus of Zhejiang University with a background in information and electronic engineering. It was incubated by High-Flyer, a hedge fund that Liang founded in 2015. Like Sam Altman of OpenAI, Liang aims to build artificial general intelligence (AGI), a form of AI that can match or even beat humans on a range of tasks.

Training large language models (LLMs) requires a team of highly trained researchers and substantial computing power. In a recent interview with the Chinese media outlet LatePost, Kai-Fu Lee, a veteran entrepreneur and former head of Google China, said that only "front-row players" typically engage in building foundation models such as ChatGPT, as it's so resource-intensive. The situation is further complicated by the US export controls on advanced semiconductors. High-Flyer's decision to venture into AI is directly related to these constraints, however. Long before the anticipated sanctions, Liang acquired a substantial stockpile of Nvidia A100 chips, a type now banned from export to China. The Chinese media outlet 36Kr estimates that the company has over 10,000 units in stock, but Dylan Patel, founder of the AI research consultancy SemiAnalysis, estimates that it has at least 50,000. Recognizing the potential of this stockpile for AI training is what led Liang to establish DeepSeek, which was able to use the chips in combination with lower-power ones to develop its models.
Tech giants like Alibaba and ByteDance, as well as a handful of startups with deep-pocketed investors, dominate the Chinese AI space, making it challenging for small or medium-sized enterprises to compete. A company like DeepSeek, which has no plans to raise funds, is rare. Zihan Wang, the former DeepSeek employee, told MIT Technology Review that he had access to abundant computing resources and was given freedom to experiment when working at DeepSeek, a luxury that few fresh graduates would get at any company.

In an interview with the Chinese media outlet 36Kr in July 2024, Liang said that an additional challenge Chinese companies face, on top of chip sanctions, is that their AI engineering techniques tend to be less efficient. "We [most Chinese companies] have to consume twice the computing power to achieve the same results. Combined with data efficiency gaps, this could mean needing up to four times more computing power. Our goal is to continuously close these gaps," he said. But DeepSeek found ways to reduce memory usage and speed up calculation without significantly sacrificing accuracy. "The team loves turning a hardware challenge into an opportunity for innovation," says Wang. Liang himself remains deeply involved in DeepSeek's research process, running experiments alongside his team. "The whole team shares a collaborative culture and dedication to hardcore research," Wang says.

As well as prioritizing efficiency, Chinese companies are increasingly embracing open-source principles. Alibaba Cloud has released over 100 new open-source AI models, supporting 29 languages and catering to various applications, including coding and mathematics. Similarly, startups like Minimax and 01.AI have open-sourced their models. According to a white paper released last year by the China Academy of Information and Communications Technology, a state-affiliated research institute, the number of AI large language models worldwide has reached 1,328, with 36% originating in China.
This positions China as the second-largest contributor to AI, behind the United States. "This generation of young Chinese researchers identify strongly with open-source culture because they benefit so much from it," says Thomas Qitong Cao, an assistant professor of technology policy at Tufts University.

"The US export control has essentially backed Chinese companies into a corner where they have to be far more efficient with their limited computing resources," says Matt Sheehan, an AI researcher at the Carnegie Endowment for International Peace. "We are probably going to see a lot of consolidation in the future related to the lack of compute."

That might already have started to happen. Two weeks ago, Alibaba Cloud announced that it has partnered with the Beijing-based startup 01.AI, founded by Kai-Fu Lee, to merge research teams and establish an "industrial large model laboratory." "It is energy-efficient and natural for some kind of division of labor to emerge in the AI industry," says Cao, the Tufts professor. The rapid evolution of AI demands agility from Chinese firms to survive.
  • The Download: OpenAI's agent, and what to expect from robotics
    www.technologyreview.com
    This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

OpenAI launches Operator, an agent that can use a computer for you

What's new: After weeks of buzz, OpenAI has released Operator, its first AI agent. Operator is a web app that can carry out simple online tasks in a browser, such as booking concert tickets or filling an online grocery order. The app is powered by a new model called Computer-Using Agent (CUA for short), built on top of OpenAI's multimodal large language model GPT-4o.

Why it matters: OpenAI claims that Operator outperforms similar rival tools, including Anthropic's Computer Use and Google DeepMind's Mariner. The fact that three of the world's top AI firms have converged on the same vision of what agent-based models could be makes one thing clear: the battle for AI supremacy has a new frontier, and it's our computer screens. Read the full story.

Will Douglas Heaven

+ If you're interested in reading more about AI agents, check out this piece explaining why they're AI's next big thing.

What's next for robots

James O'Donnell

In the many conversations I've had about robots, I've also found that most people tend to fall into three camps. Some are upbeat and vocally hopeful that a future is just around the corner in which machines can expertly handle much of what is currently done by humans, from cooking to surgery. Others are scared: of job losses, injuries, and whatever problems may come up as we try to live side by side. The final camp, which I think is the largest, is just unimpressed.

We've been sold lots of promises that robots will transform society ever since the first robotic arm was installed on an assembly line at a General Motors plant in New Jersey in 1961. Few of those promises have panned out so far. But this year, there's reason to think that even those staunchly in the bored camp will be intrigued by what's happening in the robot races.
Here's a glimpse at what to keep an eye on this year. Read the full story.

This piece is part of MIT Technology Review's What's Next series, looking across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

The must-reads

I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

1 Facebook and Instagram blocked and hid abortion pill posts
But Meta denies it's anything to do with its recent hate speech restriction U-turn. (NYT $)
+ The company's widespread changes are making advertisers nervous. (Insider $)
+ A contraceptive drug could act as an abortion pill substitute. (The Atlantic $)

2 Donald Trump's staff are furious with Elon Musk
His decision to trash-talk the President's new AI deal is ruffling aides' feathers. (Politico)
+ For once, Trump doesn't seem to want to wade in. (CNN)
+ Stargate's newest data center will be built in the small Texan city of Abilene. (Bloomberg $)

3 Watch the Trump administration delete agency pages in real time
An agency GitHub records the documents, handbooks, and bots as they're deleted or amended. (404 Media)

4 Central Europe's power grid is vulnerable to attack
Its facilities' unencrypted radio signals leave it wide open to malicious interference. (Ars Technica)
+ The race to replace the powerful greenhouse gas that underpins the power grid. (MIT Technology Review)

5 OpenAI's conversion to becoming a for-profit is under investigation
California's attorney general wants to know more about its asset transfer plans. (The Markup)
+ One major obstacle is determining how much equity Microsoft would hold. (FT $)

6 WeRide has its sights set on becoming a driverless power player
The Chinese company has ambitious plans to expand all over the world. (WSJ $)
+ Meanwhile, Tesla is issuing a safety update to 1.2 million cars in China. (Bloomberg $)
+ How Wayve's driverless cars will meet one of their biggest challenges yet. (MIT Technology Review)

7 How fungi spores can help save endangered plants
But it's a delicate balancing act. (Knowable Magazine)
+ Africa fights rising hunger by looking to foods of the past. (MIT Technology Review)

8 The fight over our tech-addled attention span
It's not that we can't focus; it's what we're focusing on. (New Yorker $)

9 TikTok is still MIA from US app stores
Opportunists are flogging iPhones with the pre-installed app for eye-watering prices. (Insider $)

10 How random is Spotify's shuffle, really?
And can algorithms be depended on to deal in true randomness? (FT $)

Quote of the day

"I can't imagine that I personally can make any difference in their wealth, power or influence. But I can't be a part of offering them my life and my joy to then turn it back around and make money off of me."

Michael Raine, a 50-year-old Facebook and Instagram user, explains to the Washington Post why he doesn't want to contribute to the sprawling wealth of Meta boss Mark Zuckerberg any more.

The big story

How to stop a state from sinking
April 2024

In a 10-month span between 2020 and 2021, southwest Louisiana saw five climate-related disasters, including two destructive hurricanes. As if that wasn't bad enough, more storms are coming, and many areas are not prepared.

But some government officials and state engineers are hoping there is an alternative: elevation. The $6.8 billion Southwest Coastal Louisiana Project is betting that raising residences by a few feet will keep Louisianans in their communities. Ultimately, it's something of a last-ditch effort to preserve this slice of coastline, even as some locals pick up and move inland and as formal plans for managed retreat become more popular in climate-vulnerable areas across the country and the rest of the world. Read the full story.

Xander Peters

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)

+ How two enterprising actors staged a daring performance of Hamlet inside Grand Theft Auto.
+ Warning: these movies are dangerous!
+ Madonna released "Material Girl" 40 years ago this week, and changed the face of pop forever.
+ And finally, what everyone has been dying to know: do dogs really watch TV?
  • The US withdrawal from the WHO will hurt us all
    www.technologyreview.com
    This article first appeared in The Checkup, MIT Technology Review's weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

On January 20, his first day in office, US president Donald Trump signed an executive order to withdraw the US from the World Health Organization. "Ooh, that's a big one," he said as he was handed the document.

The US is the biggest donor to the WHO, and the loss of this income is likely to have a significant impact on the organization, which develops international health guidelines, investigates disease outbreaks, and acts as an information-sharing hub for member states. But the US will also lose out. "It's a very tragic and sad event that could only hurt the United States in the long run," says William Moss, an epidemiologist at Johns Hopkins Bloomberg School of Public Health in Baltimore.

Trump appears to take issue with the amount the US donates to the WHO. He points out that it makes a much bigger contribution than China, a country with a population four times that of the US. "It seems a little unfair to me," he said as he prepared to sign the executive order.

It is true that the US is far and away the biggest financial supporter of the WHO. The US contributed $1.28 billion over the two-year period covering 2022 and 2023. By comparison, the second-largest donor, Germany, contributed $856 million in the same period. The US currently contributes 14.5% of the WHO's total budget. But it's not as though the WHO sends a billion-dollar bill to the US. All member states are required to pay membership dues, which are calculated as a percentage of a country's gross domestic product. For the US, this figure comes to $130 million. China pays $87.6 million. But the vast majority of the US's contributions to the WHO are made on a voluntary basis; in recent years, the donations have been part of multibillion-dollar spending on global health by the US government.
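The split between assessed dues and voluntary giving in the figures above can be made concrete with back-of-the-envelope arithmetic. One assumption to flag: the article does not state explicitly that the $130 million in assessed dues covers the same two-year period as the $1.28 billion total, so the percentages below are only indicative.

```python
# Back-of-the-envelope split of the US WHO contribution figures quoted above.
# Assumption (not stated in the article): the $130M assessed dues cover the
# same 2022-2023 period as the $1.28B total US contribution.
total_us_millions = 1280   # total US contribution, 2022-2023
assessed_millions = 130    # assessed membership dues
voluntary_millions = total_us_millions - assessed_millions

print(voluntary_millions)                                # 1150
print(round(voluntary_millions / total_us_millions, 2))  # 0.9
```

Under that assumption, roughly 90% of the US contribution was voluntary rather than obligatory, which is why a withdrawal removes far more than the membership dues alone.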
(Separately, the Bill and Melinda Gates Foundation contributed $830 million over 2022 and 2023.)

There's a possibility that other member nations will increase their donations to help cover the shortfall left by the US's withdrawal. But it is not clear who will step up, or what implications a change in the structure of donations would have. Martin McKee, a professor of European public health at the London School of Hygiene and Tropical Medicine, thinks it is unlikely that European members will increase their contributions by much. China, India, Brazil, South Africa, and the Gulf states, on the other hand, may be more likely to pay more. But again, it isn't clear how this will pan out, or whether any of these countries will expect greater influence over global health policy decisions as a result of increasing their donations.

WHO funds are spent on a range of global health projects: programs to eradicate polio, rapidly respond to health emergencies, improve access to vaccines and medicines, develop pandemic prevention strategies, and more. The loss of US funding is likely to have a significant impact on at least some of these programs. "Diseases don't stick to national boundaries, hence this decision is not only concerning for the US, but in fact for every country in the world," says Pauline Scheelbeek at the London School of Hygiene and Tropical Medicine. "With the US no longer reporting to the WHO nor funding part of this process, the evidence on which public health interventions and solutions should be based is incomplete."

"It's going to hurt global health," adds Moss. "It's going to come back to bite us."

There's more on how the withdrawal could affect health programs, vaccine coverage, and pandemic preparedness in this week's coverage.

Now read the rest of The Checkup

Read more from MIT Technology Review's archive

This isn't the first time Donald Trump has signaled his desire for the US to leave the WHO. He proposed a withdrawal during his last term, in 2020.
While the WHO is not perfect, it needs more power and funding, not less, Charles Kenny, director of technology and development at the Center for Global Development, argued at the time.

The move drew condemnation from those working in public health then, too. The editor in chief of the medical journal The Lancet called it "a crime against humanity," as Charlotte Jee reported.

In 1974, the WHO launched an ambitious program to get lifesaving vaccines to all children around the world. Fifty years on, vaccines are thought to have averted 154 million deaths, including 146 million in children under the age of five.

The WHO has also seen huge success in its efforts to eradicate polio. Today, wild forms of the virus have been eradicated in all but two countries. But vaccine-derived forms of the virus can still crop up around the world.

At the end of a round of discussions in September among WHO member states working on a pandemic agreement, director-general Tedros Adhanom Ghebreyesus remarked, "The next pandemic will not wait for us, whether from a flu virus like H5N1, another coronavirus, or another family of viruses we don't yet know about." The H5N1 virus has been circulating on US dairy farms for months now, and the US is preparing for potential human outbreaks.

From around the web

People with cancer paid $45,000 for an experimental blood-filtering treatment, delivered at a clinic in Antigua, after being misled about its effectiveness. Six of them have died since their treatments. (The New York Times)

The Trump administration has instructed federal health agencies to pause all external communications, such as health advisories, weekly scientific reports, updates to websites, and social media posts. (The Washington Post)

A new virtual retina, modeled on human retinas, has been developed to study the impact of retinal implants. The three-dimensional model simulates over 10,000 neurons.
(Brain Stimulation)

Trump has signed an executive order stating that it is the policy of the United States to "recognize two sexes, male and female." The document "defies decades of research into how human bodies grow and develop," STAT reports, and represents "a dramatic failure to understand biology," according to a neuroscientist who studies the development of sex. (STAT)

Attention, summer holiday planners: Biting sandflies in the Mediterranean region are transmitting Toscana virus at an increasing rate. The virus is a major cause of central nervous system disorders in the region. Italy saw a 2.6-fold increase in the number of reported infections between the 2016–21 period and 2022–23. (Eurosurveillance)
  • What's next for robots
    www.technologyreview.com
    MIT Technology Review's What's Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Jan Liphardt teaches bioengineering at Stanford, but to many strangers in Los Altos, California, he is a peculiar man they see walking a four-legged robotic dog down the street. Liphardt has been experimenting with building and modifying robots for years, and when he brings his dog out in public, he generally gets one of three reactions. Young children want to have one, their parents are creeped out, and baby boomers try to ignore it. "They'll quickly walk by," he says, "like, 'What kind of dumb new stuff is going on here?'"

In the many conversations I've had about robots, I've also found that most people tend to fall into these three camps, though I don't see such a neat age division. Some are upbeat and vocally hopeful that a future is just around the corner in which machines can expertly handle much of what is currently done by humans, from cooking to surgery. Others are scared: of job losses, injuries, and whatever problems may come up as we try to live side by side. The final camp, which I think is the largest, is just unimpressed. We've been sold lots of promises that robots will transform society ever since the first robotic arm was installed on an assembly line at a General Motors plant in New Jersey in 1961. Few of those promises have panned out so far. But this year, there's reason to think that even those staunchly in the bored camp will be intrigued by what's happening in the robot races. Here's a glimpse at what to keep an eye on.

Humanoids are put to the test

The race to build humanoid robots is motivated by the idea that the world is set up for the human form, and that automating that form could mean a seismic shift for robotics.
It is led by some particularly outspoken and optimistic entrepreneurs, including Brett Adcock, the founder of Figure AI, a company making such robots that's valued at more than $2.6 billion (it's begun testing its robots with BMW). Adcock recently told Time, "Eventually, physical labor will be optional." Elon Musk, whose company Tesla is building a version called Optimus, has said humanoid robots will create "a future where there is no poverty." A robotics company called Eliza Wakes Up is taking preorders for a $420,000 humanoid called, yes, Eliza.

In June 2024, Agility Robotics sent a fleet of its Digit humanoid robots to GXO Logistics, which moves products for companies ranging from Nike to Nestlé. The humanoids can handle most tasks that involve picking things up and moving them somewhere else, like unloading pallets or putting boxes on a conveyor. There have been hiccups: Highly polished concrete floors can cause robots to slip at first, and buildings need good Wi-Fi coverage for the robots to keep functioning. But charging is a bigger issue. Agility's current version of Digit, with a 39-pound battery, can run for two to four hours before it needs to charge for one hour, so swapping out the robots for fresh ones is a common task on each shift. If there are a small number of charging docks installed, the robots can theoretically charge by shuffling among the docks themselves overnight when some facilities aren't running, but moving around on their own can set off a building's security system. "It's a problem," says CTO Melonee Wise.

Wise is cautious about whether humanoids will be widely adopted in workplaces. "I've always been a pessimist," she says. That's because getting robots to work well in a lab is one thing, but integrating them into a bustling warehouse full of people and forklifts moving goods on tight deadlines is another task entirely.
If 2024 was the year of unsettling humanoid product launch videos, this year we will see those humanoids put to the test, and we'll find out whether they'll be as productive for paying customers as promised. Now that Agility's robots have been deployed in fast-paced customer facilities, it's clear that small problems can really add up.

Then there are issues with how robots and humans share spaces. In the GXO facility the two work in completely separate areas, Wise says, but there are cases where, for example, a human worker might accidentally leave something obstructing a charging station. That means Agility's robots can't return to the dock to charge, so they need to alert a human employee to move the obstruction out of the way, slowing operations down. It's often said that robots don't call out sick or need health care. But this year, as fleets of humanoids arrive on the job, we'll begin to find out the limitations they do have.

Learning from imagination

The way we teach robots how to do things is changing rapidly. It used to be necessary to break their tasks down into steps with specifically coded instructions, but now, thanks to AI, those instructions can be gleaned from observation. Just as ChatGPT was taught to write through exposure to trillions of sentences rather than by explicitly learning the rules of grammar, robots are learning through videos and demonstrations.

That poses a big question: Where do you get all these videos and demonstrations for robots to learn from? Nvidia, the world's most valuable company, has long aimed to meet that need with simulated worlds, drawing on its roots in the video-game industry. It creates worlds in which roboticists can expose digital replicas of their robots to new environments to learn. A self-driving car can drive millions of virtual miles, or a factory robot can learn how to navigate in different lighting conditions. In December, the company went a step further, releasing what it's calling a "world foundation model."
Called Cosmos, the model has learned from 20 million hours of video (the equivalent of watching YouTube nonstop since Rome was at war with Carthage) that can be used to generate synthetic training data.

Here's an example of how this model could help in practice. Imagine you run a robotics company that wants to build a humanoid that cleans up hospitals. You can start building this robot's brain with a model from Nvidia, which will give it a basic understanding of physics and how the world works, but then you need to help it figure out the specifics of how hospitals work. You could go out and take videos and images of the insides of hospitals, or pay people to wear sensors and cameras while they go about their work there. "But those are expensive to create and time consuming, so you can only do a limited number of them," says Rev Lebaredian, vice president of simulation technologies at Nvidia. Cosmos can instead take a handful of those examples and create a three-dimensional simulation of a hospital. It will then start making changes (different floor colors, different sizes of hospital beds) and create slightly different environments. "You'll multiply that data that you captured in the real world millions of times," Lebaredian says. In the process, the model will be fine-tuned to work well in that specific hospital setting. It's sort of like learning both from your experiences in the real world and from your own imagination (stipulating that your imagination is still bound by the rules of physics). Teaching robots through AI and simulations isn't new, but it's going to become much cheaper and more powerful in the years to come.

A smarter brain gets a smarter body

Plenty of progress in robotics has to do with improving the way a robot senses and plans what to do: its "brain," in other words.
Those advancements can often happen faster than those that improve a robot's body, which determines how well a robot can move through the physical world, especially in environments that are more chaotic and unpredictable than controlled assembly lines. The military has always been keen on changing that and expanding the boundaries of what's physically possible.

The US Navy has been testing machines from a company called Gecko Robotics that can navigate up vertical walls (using magnets) to do things like infrastructure inspections, checking for cracks, flaws, and bad welding on aircraft carriers. There are also investments being made for the battlefield. While nimble and affordable drones have reshaped rural battlefields in Ukraine, new efforts are underway to bring those drone capabilities indoors. The defense manufacturer Xtend received an $8.8 million contract from the Pentagon in December 2024 for its drones, which can navigate in confined indoor spaces and urban environments. These so-called loitering munitions are one-way attack drones carrying explosives that detonate on impact. "These systems are designed to overcome challenges like confined spaces, unpredictable layouts, and GPS-denied zones," says Rubi Liani, cofounder and CTO at Xtend. Deliveries to the Pentagon should begin in the first few months of this year.

Another initiative, sparked in part by the Replicator project (the Pentagon's plan to spend more than $1 billion on small unmanned vehicles), aims to develop more autonomously controlled submarines and surface vehicles. This is particularly of interest as the Department of Defense focuses increasingly on the possibility of a future conflict in the Pacific between China and Taiwan. In such a conflict, the drones that have dominated the war in Ukraine would serve little use because battles would be waged almost entirely at sea, where small aerial drones would be limited by their range. Instead, undersea drones would play a larger role.
All these changes, taken together, point toward a future where robots are more flexible in how they learn, where they work, and how they move. Jan Liphardt from Stanford thinks the next frontier of this transformation will hinge on the ability to instruct robots through speech. Large language models' ability to understand and generate text has already made them a sort of translator between Liphardt and his robot. "We can take one of our quadrupeds and we can tell it, 'Hey, you're a dog,' and the thing wants to sniff you and tries to bark," he says. "Then we do one word change: 'You're a cat.' Then the thing meows and, you know, runs away from dogs. And we haven't changed a single line of code."

Correction: A previous version of this story incorrectly stated that the robotics company Eliza Wakes Up has ties to a16z.
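The one-word persona swap Liphardt describes can be sketched as a change to a natural-language system prompt, with the robot's control code untouched. Everything below is a toy stand-in (hypothetical function names, canned responses), not his actual setup or any real LLM call.

```python
# Toy sketch of the one-word persona swap: behavior is steered by a
# natural-language system prompt, so changing "dog" to "cat" changes the
# robot's reaction without editing any control code. respond() is a
# hypothetical stand-in for a call to a language model.

def make_system_prompt(animal):
    return f"You are a {animal}. React to the world accordingly."

def respond(system_prompt, event):
    # stand-in: a real system would send the prompt and event to an LLM
    if "dog" in system_prompt:
        return "barks and sniffs"
    if "cat" in system_prompt:
        return "meows and backs away"
    return "does nothing"

print(respond(make_system_prompt("dog"), "a stranger approaches"))  # barks and sniffs
print(respond(make_system_prompt("cat"), "a stranger approaches"))  # meows and backs away
```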
  • OpenAI launches Operator, an agent that can use a computer for you
    www.technologyreview.com
    After weeks of buzz, OpenAI has released Operator, its first AI agent. Operator is a web app that can carry out simple online tasks in a browser, such as booking concert tickets or filling an online grocery order. The app is powered by a new model called Computer-Using Agent (CUA, pronounced "coo-ah," for short), built on top of OpenAI's multimodal large language model GPT-4o.

Operator is available today at operator.chatgpt.com to people in the US signed up with ChatGPT Pro, OpenAI's premium $200-a-month service. The company says it plans to roll the tool out to other users in the future.

OpenAI claims that Operator outperforms similar rival tools, including Anthropic's Computer Use (a version of Claude 3.5 Sonnet that can carry out simple tasks on a computer) and Google DeepMind's Mariner (a web-browsing agent built on top of Gemini 2.0). The fact that three of the world's top AI firms have converged on the same vision of what agent-based models could be makes one thing clear: the battle for AI supremacy has a new frontier, and it's our computer screens.

"Moving from generating text and images to doing things is the right direction," says Ali Farhadi, CEO of the Allen Institute for AI (AI2). "It unlocks business, solves new problems." Farhadi thinks that doing things on a computer screen is a natural first step for agents: "It is constrained enough that the current state of the technology can actually work," he says. "At the same time, it's impactful enough that people might use it." (AI2 is working on its own computer-using agent, says Farhadi.)

Don't believe the hype

OpenAI's announcement also confirms one of two rumors that circled the internet this week. One predicted that OpenAI was about to reveal an agent-based app, after details about Operator were leaked on social media ahead of its release. The other predicted that OpenAI was about to reveal a new superintelligence, and that officials for newly inaugurated President Trump would be briefed on it. Could the two rumors be linked?
OpenAI superfans wanted to know. Nope. OpenAI gave MIT Technology Review a preview of Operator in action yesterday. The tool is an exciting glimpse of large language models' potential to do a lot more than answer questions. But Operator is an experimental work in progress. "It's still early, it still makes mistakes," says Yash Kumar, a researcher at OpenAI. (As for the wild superintelligence rumors, let's leave that to OpenAI CEO Sam Altman to address: "twitter hype is out of control again," he posted on January 20. "pls chill and cut your expectations 100x!") Like Anthropic's Computer Use and Google DeepMind's Mariner, Operator takes screenshots of a computer screen and scans the pixels to figure out what actions it can take. CUA, the model behind it, is trained to interact with the same graphical user interfaces (buttons, text boxes, menus) that people use when they do things online. It scans the screen, takes an action, scans the screen again, takes another action, and so on. That lets the model carry out tasks on most websites that a person can use. "Traditionally the way models have used software is through specialized APIs," says Reiichiro Nakano, a scientist at OpenAI. (An API, or application programming interface, is a piece of code that acts as a kind of connector, allowing different bits of software to be hooked up to one another.) That puts a lot of apps and most websites off limits, he says: "But if you create a model that can use the same interface that humans use on a daily basis, it opens up a whole new range of software that was previously inaccessible." CUA also breaks tasks down into smaller steps and tries to work through them one by one, backtracking when it gets stuck. OpenAI says CUA was trained with techniques similar to those used for its so-called reasoning models, o1 and o3.
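The scan-act-scan loop described above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in, not OpenAI's actual API: `choose_action` plays the role of the model call, and `apply_action` plays the role of the browser.

```python
# Minimal sketch of a screenshot-driven agent loop of the kind described
# for CUA: observe the screen, pick a GUI action, apply it, repeat.
# All names (Action, choose_action, apply_action) are illustrative
# assumptions, not OpenAI's real interfaces.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "click", "type", or "done"
    target: str = ""   # UI element the action applies to

def choose_action(screenshot: str, goal: str) -> Action:
    # Stand-in for the model call: map the current screen + goal to an action.
    if goal.lower() in screenshot.lower():
        return Action(kind="done")
    return Action(kind="click", target="search box")

def apply_action(screenshot: str, action: Action) -> str:
    # Stand-in for the browser: return the screen as it looks after the action.
    return screenshot + " | did " + action.kind

def run_agent(goal: str, screenshot: str, max_steps: int = 10) -> list[Action]:
    # Scan the screen, take an action, scan again -- until done or out of steps.
    trace = []
    for _ in range(max_steps):
        action = choose_action(screenshot, goal)
        trace.append(action)
        if action.kind == "done":
            break
        screenshot = apply_action(screenshot, action)
    return trace
```

The step cap stands in for the backtracking and give-up behavior the article mentions; a real agent would also verify each action's effect before moving on.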
Operator can be instructed to search for campsites in Yosemite with good picnic tables. OPENAI

OpenAI has tested CUA against a number of industry benchmarks designed to assess the ability of an agent to carry out tasks on a computer. The company claims that its model beats Computer Use and Mariner in all of them. For example, on OSWorld, which tests how well an agent performs tasks such as merging PDF files or manipulating an image, CUA scores 38.1% to Computer Use's 22.0%. In comparison, humans score 72.4%. On a benchmark called WebVoyager, which tests how well an agent performs tasks in a browser, CUA scores 87%, Mariner 83.5%, and Computer Use 56%. (Mariner can only carry out tasks in a browser and therefore does not score on OSWorld.) For now, Operator can also only carry out tasks in a browser. OpenAI plans to make CUA's wider abilities available in the future via an API that other developers can use to build their own apps. This is how Anthropic released Computer Use in December. OpenAI says it has tested CUA's safety, using red teams to explore what happens when users ask it to do unacceptable tasks (such as research how to make a bioweapon), when websites contain hidden instructions designed to derail it, and when the model itself breaks down. "We've trained the model to stop and ask the user for information before doing anything with external side effects," says Casey Chu, another researcher on the team.

Look! No hands

To use Operator, you simply type instructions into a text box. But instead of calling up the browser on your computer, Operator sends your instructions to a remote browser running on an OpenAI server. OpenAI claims that this makes the system more efficient. It's another key difference between Operator, Computer Use, and Mariner (which runs inside Google's Chrome browser on your own computer). Because it's running in the cloud, Operator can carry out multiple tasks at once, says Kumar.
In the live demo, he asked Operator to use OpenTable to book him a table for two at 6:30 p.m. at a restaurant called Octavia in San Francisco. Straight away, Operator opened up OpenTable and started clicking through options. "As you can see, my hands are off the keyboard," he said. OpenAI is collaborating with a number of businesses, including OpenTable, StubHub, Instacart, DoorDash, and Uber. The nature of those collaborations is not exactly clear, but Operator appears to suggest preset websites to use for certain tasks. While the tool navigated dropdowns on OpenTable, Kumar sent Operator off to find four tickets for a Kendrick Lamar show on StubHub. While it did that, he pasted a photo of a handwritten shopping list and asked Operator to add the items to his Instacart. He waited, flicking between Operator's tabs. "If it needs help or if it needs confirmations, it'll come back to you with questions and you can answer it," he said. Kumar says he has been using Operator at home. It helps him stay on top of grocery shopping: "I can just quickly click a photo of a list and send it to work," he says. It's also become a sidekick in his personal life. "I have a date night every Thursday," says Kumar. So every Thursday morning, he instructs Operator to send him a list of five restaurants that have a table for two that evening. "Of course, I could do that, but it takes me 10 minutes," he says. "And I often forget to do it. With Operator, I can run the task with one click. There's no burden of booking."
  • This is what might happen if the US withdraws from the WHO
    www.technologyreview.com
    On January 20, his first day in office, US president Donald Trump signed an executive order to withdraw the US from the World Health Organization. "Ooh, that's a big one," he said as he was handed the document. The US is the biggest donor to the WHO, and the loss of this income is likely to have a significant impact on the organization, which develops international health guidelines, investigates disease outbreaks, and acts as an information-sharing hub for member states. But the US will also lose out. "It's a very tragic and sad event that could only hurt the United States in the long run," says William Moss, an epidemiologist at the Johns Hopkins Bloomberg School of Public Health in Baltimore.

A little unfair?

Trump appears to take issue with the amount the US donates to the WHO. He points out that it makes a much bigger contribution than China, a country with a population four times that of the US. "It seems a little unfair to me," he said as he prepared to sign the executive order. It is true that the US is far and away the biggest financial supporter of the WHO. The US contributed $1.28 billion over the two-year period covering 2022 and 2023. By comparison, the second-largest donor, Germany, contributed $856 million in the same period. The US currently contributes 14.5% of the WHO's total budget. But it's not as though the WHO sends a billion-dollar bill to the US. All member states are required to pay membership dues, which are calculated as a percentage of a country's gross domestic product. For the US, this figure comes to $130 million. China pays $87.6 million. But the vast majority of the US's contributions to the WHO are made on a voluntary basis; in recent years, the donations have been part of multibillion-dollar spending on global health by the US government. (Separately, the Bill and Melinda Gates Foundation contributed $830 million over 2022 and 2023.)
It's possible that other member nations will increase their donations to help cover the shortfall left by the US's withdrawal. But it is not clear who will step up, or what implications it might have for the structure of donations. Martin McKee, professor of European public health at the London School of Hygiene & Tropical Medicine, thinks it is unlikely that European members will increase their contributions by much. The Gulf states, China, India, Brazil, and South Africa, on the other hand, may be more likely to pay more. But again, it isn't clear how this will pan out, or whether any of these countries will expect greater influence over global health policy decisions as a result of increasing their donations.

Deep impacts

WHO funds are spent on a range of global health projects: programs to eradicate polio, rapidly respond to health emergencies, improve access to vaccines and medicines, develop pandemic prevention strategies, and more. The loss of US funding is likely to have a significant impact on at least some of these programs. It is not clear which programs will lose funding, or when they will be affected. The US is required to give 12 months' notice to withdraw its membership, but voluntary contributions might stop before that time is up. For the last few years, WHO member states have been negotiating a pandemic agreement designed to improve collaboration on preparing for future pandemics. The agreement is set to be finalized in 2025. But these discussions will be disrupted by the US withdrawal, says McKee. "It will create confusion about how effective any agreement will be and what it will look like," he says. The agreement itself also won't make as big an impact without the US as a signatory, says Moss, who is also a member of a WHO vaccine advisory committee. The US would not be held to information-sharing standards that other countries could benefit from, and it might not be privy to important health information from other member nations.
The global community might also lose out on the US's resources and expertise. "Having a major country like the United States not be a part of that really undermines the value of any pandemic agreement," he says. McKee thinks that the loss of funding will also affect efforts to eradicate polio and to control outbreaks of mpox in the Democratic Republic of Congo, Uganda, and Burundi, which continue to report hundreds of cases per week. The virus has the potential to spread, including to the US, he points out. Moss is concerned about the potential for the spread of vaccine-preventable diseases. Robert F. Kennedy Jr., Trump's pick to lead the Department of Health and Human Services, is a prominent antivaccine advocate, and Moss worries about potential changes to vaccination-based health policies in the US. That, combined with a weakening of the WHO's ability to control disease outbreaks, could be a double whammy, he says: "We're setting ourselves up for large measles disease outbreaks in the United States." At the same time, the US is up against another growing threat to public health: the circulation of bird flu on poultry and dairy farms. The US has seen outbreaks of the H5N1 virus on poultry farms in all states, and the virus has been detected in 928 dairy herds across 16 states, according to the US Centers for Disease Control and Prevention. There have been 67 reported human cases in the US, and one person has died. While we don't yet have evidence that the virus can spread between people, the US and other countries are already preparing for potential outbreaks. But this preparation relies on a thorough and clear understanding of what is happening on the ground. The WHO plays an important role in information sharing: countries report early signs of outbreaks to the agency, which then shares the information with its members.
This kind of information not only allows countries to develop strategies to limit the spread of disease but can also allow them to share genetic sequences of viruses and develop vaccines. Member nations need to know what's happening in the US, and the US needs to know what's happening globally. "Both of those channels of communication would be hindered by this," says Moss. As if all of that weren't enough, the US also stands to suffer in terms of its reputation as a leader in global public health. By saying to the world "We don't care about your health," it sends a message that is likely to reflect badly on it, says McKee. "It's a classic lose-lose situation," he adds. "It's going to hurt global health," says Moss. "It's going to come back to bite us."
  • The Download: US WHO exit risks, and underground hydrogen
    www.technologyreview.com
    This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

This is what might happen if the US withdraws from the WHO

On January 20, his first day in office, US president Donald Trump signed an executive order to withdraw the US from the World Health Organization. The US is the biggest donor to the WHO, and the loss of this income is likely to have a significant impact on the organization, which develops international health guidelines, investigates disease outbreaks, and acts as an information-sharing hub for member states. But the US will also lose out. Read the full story.

Jessica Hamzelou

Why the next energy race is for underground hydrogen

It might sound like something straight out of the 19th century, but one of the most cutting-edge areas in energy today involves drilling deep underground to hunt for materials that can be burned for energy. The difference is that this time, instead of looking for fossil fuels, the race is on to find natural deposits of hydrogen. In an age of lab-produced breakthroughs, it feels like something of a regression to go digging for resources. But looking underground could help meet energy demand while also addressing climate change. Read the full story.

Casey Crownhart

This article is from The Spark, MIT Technology Review's weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Cattle burping remedies: 10 Breakthrough Technologies 2025

Companies are finally making real progress on one of the trickiest problems for climate change: cow burps. The world's herds of cattle belch out methane as a by-product of digestion, as do sheep and goats. That powerful greenhouse gas makes up the single biggest source of livestock emissions, which together contribute 11% to 20% of the world's total climate pollution, depending on the analysis. Enter the cattle burping supplement.
DSM-Firmenich, a Netherlands-based conglomerate, says its Bovaer food supplement significantly reduces the amount of methane that cattle belch, and it's now available in dozens of countries. Read the full story.

James Temple

Cattle burping remedies is one of our 10 Breakthrough Technologies for 2025, MIT Technology Review's annual list of tech to watch. Check out the rest of the list, and cast your vote for the honorary 11th breakthrough.

The must-reads

I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

1 Tech leaders are squabbling over Trump's new Stargate AI project
Musk says its backers don't have enough money. Satya Nadella and Sam Altman disagree. (The Guardian)
+ It's far from the first time Musk and Altman have clashed. (Insider $)
+ The scrap could threaten Musk's cordial relationship with Donald Trump. (FT $)

2 Trump has threatened to withhold aid from California
He falsely claimed the state's officials have been refusing to fight the fires with water. (WP $)
+ A new fire broke out along the Ventura County border last night. (LA Times $)

3 Redditors are weighing up banning links to X
In response to Elon Musk's salute. (404 Media)
+ Not everyone agrees that the boycott will have the desired effect, though. (NYT $)

4 How right-leaning male YouTubers helped to elect Trump
Young men are responding favorably to content painting them as powerless. (Bloomberg $)

5 Why the US isn't handing out bird flu vaccines right now
It's not currently being treated as a priority. (Wired $)
+ How the US is preparing for a potential bird flu pandemic. (MIT Technology Review)

6 Why you might be inadvertently following Trump on social media
And why it may take a while for Meta to honor requests to unfollow. (NYT $)
+ The company has denied secretly adding users to Trump's followers list. (Insider $)
+ Handily enough, Trump has ordered the US government to stop pressuring social media firms. (WP $)

7 Investors' interest in weight-loss drugs is waning
A disappointing trial and falling sales spell bad news for the sector. (FT $)
+ Drugs like Ozempic now make up 5% of prescriptions in the US. (MIT Technology Review)

8 A software engineer is trolling OpenAI with a new domain name
Ananay Arora registered OGOpenAI.com to redirect to a Chinese AI lab. (TechCrunch)

9 Macbeth is being turned into an interactive video game
The Scottish play is being given a 21st-century makeover. (The Verge)

10 Why measuring the quality of your sleep is so tough
Not everyone agrees on what counts as good sleep, for a start. (New Scientist $)

Quote of the day

"I acknowledge that this action is largely just virtue signalling. But if somebody starts popping off Nazi salutes at the presidential inauguration of a purported first world country, then virtue signalling is the least I can do."

A Reddit moderator explains their decision to ban links to X in their forum after Elon Musk's gestures at a post-inauguration rally this week, NBC News reports.

The big story

Welcome to Chula Vista, where police drones respond to 911 calls

February 2023

In the skies above Chula Vista, California, where the police department runs a drone program, it's not uncommon to see an unmanned aerial vehicle darting across the sky. Chula Vista is one of a dozen departments in the US that operate what are called drone-as-first-responder programs, where drones are dispatched by pilots, who are listening to live 911 calls, and often arrive first at the scenes of accidents, emergencies, and crimes, cameras in tow. But many argue that police forces' adoption of drones is happening too quickly, without a well-informed public debate around privacy regulations, tactics, and limits. There's also little evidence that drone policing reduces crime. Read the full story.

Patrick Sisson

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)
+ If you were struck by the beautiful scenery in The Brutalist, check out where it was filmed.
+ This newly unearthed, previously unreleased Tina Turner track is a banger.
+ What to expect from the art world in the next 12 months.
+ Let's take a look at this year's potential runners and riders for the Oscars.
  • Why the next energy race is for underground hydrogen
    www.technologyreview.com
    It might sound like something straight out of the 19th century, but one of the most cutting-edge areas in energy today involves drilling deep underground to hunt for materials that can be burned for energy. The difference is that this time, instead of looking for fossil fuels, the race is on to find natural deposits of hydrogen. Hydrogen is already a key ingredient in the chemical industry and could be used as a greener fuel in industries from aviation and transoceanic shipping to steelmaking. Today, the gas needs to be manufactured, but there's some evidence that there are vast deposits underground. I've been thinking about underground resources a lot this week, since I've been reporting a story about a new startup, Addis Energy. The company is looking to use subsurface rocks, and the conditions down there, to produce another useful chemical: ammonia. In an age of lab-produced breakthroughs, it feels like something of a regression to go digging for resources, but looking underground could help meet energy demand while also addressing climate change. It's rare that hydrogen turns up in oil and gas operations, and for decades the conventional wisdom has been that there aren't large deposits of the gas underground. Hydrogen molecules are tiny, after all, so even if the gas was forming there, the assumption was that it would just leak out. However, there have been somewhat accidental discoveries of hydrogen over the decades, in abandoned mines or new well sites. There are reports of wells that spewed colorless gas, or flames that burned gold. And as people have looked more intentionally for hydrogen, they've started to find it. As it turns out, hydrogen tends to build up in very different rocks from those that host oil and gas deposits. While fossil-fuel prospecting tends to focus on softer rocks, like organic-rich shale, hydrogen seems most plentiful in iron-rich rocks like olivine.
The gas forms when chemical reactions at elevated temperature and pressure underground pull water apart. (There's also likely another mechanism that forms hydrogen underground, called radiolysis, in which radioactive elements emit radiation that can split water.) Some research has put the potential amount of hydrogen available at around a trillion tons: plenty to feed our demand for centuries, even if we ramp up use of the gas. The past few years have seen companies spring up around the world to try to locate and tap these resources. There's an influx in Australia, especially the southern part of the country, which seems to have conditions that are good for making hydrogen. One startup, Koloma, has raised over $350 million to aid its geologic hydrogen exploration. There are so many open questions for this industry, including how much hydrogen is actually going to be accessible and economical to extract. It's not even clear how best to look for the gas today; researchers and companies are borrowing techniques and tools from the oil and gas industry, but there could be better ways. It's also unknown how this could affect climate change. Hydrogen itself may not warm the planet, but it can contribute indirectly to global warming by extending the lifetime of other greenhouse gases. It's also often found with methane, a super-powerful greenhouse gas that could do major harm if it leaks out of operations at a significant level. There's also the issue of transportation: hydrogen isn't very dense, and it can be difficult to store and move around. Deposits that are far away from the final customers could face high costs that might make the whole endeavor uneconomical. But this whole area is incredibly exciting, and researchers are working to better understand it. Some are looking to expand the potential pool of resources by pumping water underground to stimulate hydrogen production from rocks that wouldn't naturally produce the gas.
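The water-splitting chemistry in iron-rich rocks mentioned above is commonly illustrated in the geochemistry literature with serpentinization reactions. One frequently cited net reaction (for fayalite, the iron end-member of olivine; this equation is drawn from the general literature, not from the article itself) is:

```latex
% Fayalite + water -> magnetite + silica + hydrogen gas
3\,\mathrm{Fe_2SiO_4} + 2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} + 2\,\mathrm{H_2}
```

The key step is iron being oxidized (Fe²⁺ to Fe³⁺ in magnetite) while water is reduced to hydrogen gas, which is why iron-rich rocks like olivine, rather than organic-rich shales, are the promising hosts.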
There's something fascinating to me about using the playbook of the oil and gas industry to develop an energy source that could actually help humanity combat climate change. It could be a strategic move to address energy demand, since a lot of expertise has accumulated over the roughly 150 years that we've been digging up fossil fuels. After all, it's not digging that's the problem; it's emissions.

Now read the rest of The Spark

Related reading

This story from Science, published in 2023, is a great deep dive into the world of so-called gold hydrogen. Give it a read for more on the history and geology here. For more on commercial efforts, specifically Koloma, give this piece from Canary Media a read. And for all the details on geologic ammonia and Addis Energy, check out my latest story here.

Another thing

Donald Trump officially took office on Monday and signed a flurry of executive orders. Here are a few of the most significant ones for climate:

Trump announced his intention to once again withdraw from the Paris agreement. After a one-year waiting period, the world's largest economy will officially leave the major international climate treaty. (New York Times)

The president also signed an order that pauses lease sales for offshore wind power projects in federal waters. It's not clear how much the office will be able to slow projects that already have their federal permits. (Associated Press)

Another executive order, titled "Unleashing American Energy," broadly signals a wide range of climate and energy moves. One section ends "the EV mandate." The US government doesn't have any mandates around EVs, but this bit is a signal of the administration's intent to roll back policies and funding that support adoption of these vehicles. There will almost certainly be court battles. (Wired)

Another section pauses the disbursement of tens of billions of dollars for climate and energy.
The spending was designated by Congress in two of the landmark laws from the Biden administration, the Bipartisan Infrastructure Law and the Inflation Reduction Act. Again, experts say we can likely expect legal fights. (Canary Media)

Keeping up with climate

The Chinese automaker BYD built more electric vehicles in 2024 than Tesla did. The data signals a global shift to cheaper EVs and the continued dominance of China in the EV market. (Washington Post)

A pair of nuclear reactors in South Carolina could get a second chance at life. Construction halted at the VC Summer plant in 2017, $9 billion into the project. Now the site's owner wants to sell. (Wall Street Journal) Existing reactors are more in demand than ever, as I covered in this story about what's next for nuclear power. (MIT Technology Review)

In California, charging depots for electric trucks are increasingly choosing to cobble together their own power rather than waiting years to connect to the grid. These solar- and wind-powered microgrids could help handle broader electricity demand. (Canary Media)

Wildfires in Southern California are challenging even wildlife that have adapted to frequent blazes. As fires become more frequent and intense, biologists worry about animals like mountain lions. (Inside Climate News) Experts warn that ash from the California wildfires could be toxic, containing materials like lead and arsenic. (Associated Press)

Burning wood for power isn't necessary to help the UK meet its decarbonization goals, according to a new analysis. Biomass is a controversial green power source that critics say contributes to air pollution and harms forests. (The Guardian)
  • The Download: OpenAI's lobbying, and making ammonia below the Earth's surface
    www.technologyreview.com
    This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

OpenAI has upped its lobbying efforts nearly sevenfold

OpenAI spent $1.76 million on government lobbying in 2024 and $510,000 in the last three months of the year alone, according to a new disclosure filed on Tuesday, a significant jump from 2023, when the company spent just $260,000 on Capitol Hill. The disclosure is a clear signal of the company's arrival as a political player, as its first year of serious lobbying ends and Republican control of Washington begins. While OpenAI's lobbying spending is still dwarfed by bigger tech players, the uptick comes as it and other AI companies are helping redraw the shape of AI policy. Read the full story.

James O'Donnell

A new company plans to use Earth as a chemical reactor

Forget massive steel tanks: some scientists want to make chemicals with the help of rocks deep beneath Earth's surface. New research shows that ammonia, a chemical crucial for fertilizer, can be produced from rocks at temperatures and pressures that are common in the subsurface. The research was published yesterday in Joule, and MIT Technology Review can exclusively report that a new company, called Addis Energy, has been founded to commercialize the process. Ammonia is used in most fertilizers and is a vital part of our modern food system. It's also being considered for use as a green fuel in industries like transoceanic shipping. The problem is that current processes used to make ammonia require a lot of energy and produce huge amounts of the greenhouse gases that cause climate change. Read the full story.

Casey Crownhart

There can be no winners in a US-China AI arms race

Alvin Wang Graylin and Paul Triolo

The United States and China are entangled in what many have dubbed an "AI arms race." In the early days of this standoff, US policymakers drove an agenda centered on "winning" the race, mostly from an economic perspective.
In recent months, leading AI labs such as OpenAI and Anthropic got involved in pushing the narrative of "beating China" in what appeared to be an attempt to align themselves with the incoming Trump administration. The belief that the US can win in such a race was based mostly on the early advantage it had over China in advanced GPU compute resources and the effectiveness of AI's scaling laws. But now it appears that access to large quantities of advanced compute resources is no longer the defining or sustainable advantage many had thought it would be. Read the full story.

Meet the divers trying to figure out how deep humans can go

Figuring out how the human body can withstand underwater pressure has been a problem for over a century, but a ragtag band of divers is experimenting with hydrogen to find out. This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we're publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it's released.

The must-reads

I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

1 Donald Trump has pardoned the creator of Silk Road
Ross Ulbricht was sentenced to life in prison after being found guilty of conspiracy to commit drug trafficking, money laundering, and hacking. (BBC)
+ The 40-year-old has been in prison since 2015. (NYT $)
+ It's a clear attempt to curry favor with the crypto community. (Bloomberg $)

2 The US is embarking on a major AI data center push
OpenAI, SoftBank, and Oracle will create $100 billion in computing infrastructure. (NYT $)
+ Sam Altman says the project will facilitate the birth of AGI in America. (Insider $)

3 What Trump's executive orders mean for you
From a national energy emergency to pausing wind projects. (Fast Company $)
+ The new president also officially established DOGE.
(Ars Technica)

4 YouTuber MrBeast is considering buying TikTok
His lawyer insists he's deadly serious. (CNN)
+ What is the true value of TikTok, exactly? (The Information $)
+ Trump is open to Elon Musk bidding for ownership too. (The Guardian)

5 Microsoft will foot the bill to restore part of the Amazon rainforest
In exchange for hundreds of millions of dollars' worth of carbon credits. (FT $)
+ Google, Amazon, and the problem with Big Tech's climate claims. (MIT Technology Review)

6 Google sold AI tools to Israel's military in the wake of the Hamas attack
In stark contrast to its public stance distancing itself from Israel's security apparatus. (WP $)

7 Inside the fight raging over NASA's first deep space station
Some experts argue we should start building living quarters directly on the moon instead. (Undark)
+ Here's what an exploding rocket looks like. (New Scientist $)
+ What's next for NASA's giant moon rocket? (MIT Technology Review)

8 How the Parcae satellite program helped to win the Cold War
And ushered in a new age of eavesdropping in the process. (IEEE Spectrum)

9 Startup founders are hustling for deals at inauguration parties
Networking is so back, baby. (TechCrunch)
+ How a Greenwich Village bar became a MAGA mecca. (NY Mag $)

10 How AI could revamp treatment for snake bites
Courtesy of a recent Nobel chemistry prize winner. (Economist $)

Quote of the day

"It's not at all like being an employee. There's nobody you can talk to. Everything is automated."

A gig economy driver tells the Guardian about his frustration in navigating the platforms' apps.

The big story

How tactile graphics can help end image poverty

June 2023

Chancey Fleet

In 2020, in the midst of the pandemic lockdown, my husband and I bought a house in Brooklyn and decided to rebuild the interior. He taught me a few key architectural symbols and before long I was drawing my own concepts, working toward a shared vision of the home we eventually designed.
It's a commonplace story, except for one key factor: I'm blind, and I've made it my mission to ensure that blind New Yorkers can create and explore images. As a blind tech educator, it's my job, and my passion, to introduce blind and low-vision patrons to tools that help them move through daily life with autonomy and ease. Read the full story.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)

+ To prevent sore shoulders and bad backs, it helps to know the muscles that cause them.
+ It's time to join the crispy gnocchi club.
+ If you're lucky enough to win an Academy Award, don't even think about trying to sell it.
+ Space-age bachelor pad music looks like a pretty great genre to me.
  • Implementing responsible AI in the generative age
    www.technologyreview.com
Many organizations have experimented with AI, but they haven't always gotten the full value from their investments. A host of issues standing in the way center on the accuracy, fairness, and security of AI systems. In response, organizations are actively exploring the principles of responsible AI: the idea that AI systems must be fair, transparent, and beneficial to society for the technology to be widely adopted. When responsible AI is done right, it unlocks trust and therefore customer adoption of enterprise AI. According to the US National Institute of Standards and Technology, the essential building blocks of trustworthy AI are:

- Validity and reliability
- Safety
- Security and resiliency
- Accountability and transparency
- Explainability and interpretability
- Privacy
- Fairness with mitigation of harmful bias

To investigate the current landscape of responsible AI across the enterprise, MIT Technology Review Insights surveyed 250 business leaders about how they're implementing principles that ensure AI trustworthiness. The poll found that responsible AI is important to executives, with 87% of respondents rating it a high or medium priority for their organization. A majority of respondents (76%) also say that responsible AI is a high or medium priority specifically for creating a competitive advantage. But relatively few have figured out how to turn these ideas into reality. We found that only 15% of those surveyed felt highly prepared to adopt effective responsible AI practices, despite the importance they placed on them. Putting responsible AI into practice in the age of generative AI requires a series of best practices that leading companies are adopting. These practices can include cataloging AI models and data and implementing governance controls. Companies may benefit from conducting rigorous assessments, testing, and audits for risk, security, and regulatory compliance. 
At the same time, they should also empower employees with training at scale and ultimately make responsible AI a leadership priority to ensure their change efforts stick. "We all know AI is the most influential change in technology that we've seen, but there's a huge disconnect," says Steven Hall, chief AI officer and president of EMEA at ISG, a global technology research and IT advisory firm. "Everybody understands how transformative AI is going to be and wants strong governance, but the operating model and the funding allocated to responsible AI are well below where they need to be given its criticality to the organization." Download the full report. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review's editorial staff.
  • OpenAI ups its lobbying efforts nearly seven-fold
    www.technologyreview.com
OpenAI spent $1.76 million on government lobbying in 2024 and $510,000 in the last three months of the year alone, according to a new disclosure filed on Tuesday, a significant jump from 2023, when the company spent just $260,000 on Capitol Hill. The company also disclosed a new in-house lobbyist, Meghan Dorn, who worked for five years for Senator Lindsey Graham and started at OpenAI in October. The filing also shows activity related to two new pieces of legislation in the final months of the year: the House's AI Advancement and Reliability Act, which would set up a government center for AI research, and the Senate's Future of Artificial Intelligence Innovation Act, which would create shared benchmark tests for AI models. OpenAI did not respond to questions about its lobbying efforts. But perhaps more important, the disclosure is a clear signal of the company's arrival as a political player, as its first year of serious lobbying ends and Republican control of Washington begins. While OpenAI's lobbying spending is still dwarfed by its peers (Meta tops the list of Big Tech spenders, with more than $24 million in 2024), the uptick comes as it and other AI companies have helped redraw the shape of AI policy. For the past few years, AI policy has been something like a whack-a-mole response to the risks posed by deepfakes and misinformation. But over the last year, AI companies have started to position the success of the technology as pivotal to national security and American competitiveness, arguing that the government must therefore support the industry's growth. As a result, OpenAI and others now seem poised to gain access to cheaper energy, lucrative national security contracts, and a more lax regulatory environment that's unconcerned with the minutiae of AI safety. While the big players seem more or less aligned on this grand narrative, messy divides on other issues are still threatening to break through the harmony on display at President Trump's inauguration this week. 
AI regulation really began in earnest after ChatGPT launched in November 2022. "At that point, a lot of the conversation was about responsibility," says Liana Keesing, campaigns manager for technology reform at Issue One, a democracy nonprofit that tracks Big Tech's influence. Companies were asked what they'd do about sexually abusive deepfake images and election disinformation. "Sam Altman did a very good job coming in and painting himself early as a supporter of that process," Keesing says. OpenAI started its official lobbying effort around October 2023, hiring Chan Park, a onetime Senate Judiciary Committee counsel and Microsoft lobbyist, to lead the effort. Lawmakers, particularly then Senate majority leader Chuck Schumer, were vocal about wanting to curb these particular harms; OpenAI hired Schumer's former legal counsel, Reginald Babin, as a lobbyist, according to data from OpenSecrets. This past summer, the company hired the veteran political operative Chris Lehane as its head of global policy. OpenAI's previous disclosures confirm that the company's lobbyists subsequently focused much of last year on legislation like the No Fakes Act and the Protect Elections from Deceptive AI Act. The bills did not materialize into law. But as the year went on, the regulatory goals of AI companies began to change. "One of the biggest shifts that we've seen," Keesing says, "is that they've really started to focus on energy." In September, Altman, along with leaders from Nvidia, Anthropic, and Google, visited the White House and pitched the vision that US competitiveness in AI will depend on subsidized energy infrastructure to train the best models. Altman proposed to the Biden administration the construction of multiple five-gigawatt data centers, which would each consume as much electricity as New York City. 
Around the same time, companies like Meta and Microsoft started to say that nuclear energy will provide the path forward for AI, announcing deals aimed at firing up new nuclear power plants. It seems likely OpenAI's policy team was already planning for this particular shift. In April, the company hired lobbyist Matthew Rimkunas, who worked for Bill Gates's sustainable energy effort Breakthrough Energies and, before that, spent 16 years working for Senator Graham; the South Carolina Republican serves on the Senate subcommittee that manages nuclear safety. This new AI energy race is inseparable from the positioning of AI as essential for national security and US competitiveness with China. OpenAI laid out its position in a blog post in October, writing, "AI is a transformational technology that can be used to strengthen democratic values or to undermine them. That's why we believe democracies should continue to take the lead in AI development." Then in December, the company went a step further and reversed its policy against working with the military, announcing it would develop AI models with the defense-tech company Anduril to help take down drones around military bases. That same month, Sam Altman said during an interview with The Free Press that the Biden administration was "not that effective" in shepherding AI: "The things that I think should have been the administration's priorities, and I hope will be the next administration's priorities, are building out massive AI infrastructure in the US, having a supply chain in the US, things like that." That characterization glosses over the CHIPS Act, a $52 billion stimulus to the domestic chips industry that is, at least on paper, aligned with Altman's vision. (It also preceded an executive order Biden issued just last week, to lease federal land to host the type of gigawatt-scale data centers that Altman had been asking for.) 
Intentionally or not, Altman's posture aligned him with the growing camaraderie between President Trump and Silicon Valley. Mark Zuckerberg, Elon Musk, Jeff Bezos, and Sundar Pichai all sat directly behind Trump's family at the inauguration on Monday, and Altman also attended. Many of them had also made sizable donations to Trump's inaugural fund, with Altman personally throwing in $1 million. It's easy to view the inauguration as evidence that these tech leaders are aligned with each other, and with other players in Trump's orbit. But there are still some key dividing lines that will be worth watching. Notably, there's the clash over H-1B visas, which allow many noncitizen AI researchers to work in the US. Musk and Vivek Ramaswamy (who is, as of this week, no longer a part of the so-called Department of Government Efficiency) have been pushing for that visa program to be expanded. This sparked backlash from some allies of the Trump administration, perhaps most loudly Steve Bannon. Another fault line is the battle between open- and closed-source AI. Google and OpenAI prevent anyone from knowing exactly what's in their most powerful models, often arguing that this keeps them from being used improperly by bad actors. Musk has sued OpenAI and Microsoft over the issue, alleging that closed-source models are antithetical to OpenAI's hybrid nonprofit structure. Meta, whose Llama model is open-source, recently sided with Musk in that lawsuit. Venture capitalist and Trump ally Marc Andreessen echoed these criticisms of OpenAI on X just hours after the inauguration. (Andreessen has also said that making AI models open-source makes overbearing regulations unnecessary.) Finally, there are the battles over bias and free speech. 
The vastly different approaches that social media companies have taken to moderating content, including Meta's recent announcement that it would end its US fact-checking program, raise questions about whether the way AI models are moderated will continue to splinter too. Musk has lamented what he calls the "wokeness" of many leading models, and Andreessen said on Tuesday that Chinese LLMs are "much less censored" than American LLMs (though that's not quite true, given that many Chinese AI models have government-mandated censorship in place that forbids particular topics). Altman has been more equivocal: "No two people are ever going to agree that one system is perfectly unbiased," he told The Free Press. It's only the start of a new era in Washington, but the White House has been busy. It has repealed many executive orders signed by President Biden, including the landmark order on AI that imposed rules for government use of the technology (while it appears to have kept Biden's order on leasing land for more data centers). Altman is busy as well. OpenAI, Oracle, and SoftBank reportedly plan to spend up to $500 billion on a joint venture for new data centers; the project was announced by President Trump, with Altman standing alongside. And according to Axios, Altman will also be part of a closed-door briefing with government officials on January 30, reportedly about OpenAI's development of a powerful new AI agent.
  • A new company plans to use Earth as a chemical reactor
    www.technologyreview.com
Forget massive steel tanks: some scientists want to make chemicals with the help of rocks deep beneath Earth's surface. New research shows that ammonia, a chemical crucial for fertilizer, can be produced from rocks at temperatures and pressures that are common in the subsurface. The research was published today in Joule, and MIT Technology Review can exclusively report that a new company, called Addis Energy, was founded to commercialize the process. Ammonia is used in most fertilizers and is a vital part of our modern food system. It's also being considered for use as a green fuel in industries like transoceanic shipping. The problem is that current processes used to make ammonia require a lot of energy and produce huge amounts of the greenhouse gases that cause climate change: over 1% of the global total. The new study finds that the planet's internal conditions can be used to produce ammonia in a much cleaner process. "Earth can be a factory for chemical production," says Iwnetim Abate, an MIT professor and author of the new study. This idea could be a major change for the chemical industry, which today relies on huge facilities running reactions at extremely high temperatures and pressures to make ammonia. The key ingredients for ammonia production are sources of nitrogen and hydrogen. Much of the focus on cleaner production methods currently lies in finding new ways to make hydrogen, since that chemical makes up the bulk of ammonia's climate footprint, says Patrick Molloy, a principal at the nonprofit research agency Rocky Mountain Institute. Recently, researchers and companies have located naturally occurring deposits of hydrogen underground. Iron-rich rocks tend to drive reactions that produce the gas, and these natural deposits could provide a source of low-cost, low-emissions hydrogen. While geologic hydrogen is still in its infancy as an industry, some researchers are hoping to help the process along by stimulating production of hydrogen underground. 
With the right rocks, heat, and a catalyst, you can produce hydrogen cheaply and without emitting large amounts of climate pollution. Hydrogen can be difficult to transport, though, so Abate was interested in going one step further by letting the conditions underground do the hard work in powering chemical reactions that transform hydrogen and nitrogen into ammonia. "As you dig, you get heat and pressure for free," he says. To test out how this might work, Abate and his team crushed up iron-rich minerals and added nitrates (a nitrogen source), water (a hydrogen source), and a catalyst to help reactions along in a small reactor in the lab. They found that even at relatively low temperatures and pressures, they could make ammonia in a matter of hours. If the process were scaled up, the researchers estimate, one well could produce 40,000 tons of ammonia per day. While the reactions tend to go faster at high temperature and pressure, the researchers found that ammonia production could be an economically viable process even at 130 °C (266 °F) and a little over two atmospheres of pressure, conditions that would be accessible at depths reachable with existing drilling technology. While the reactions work in the lab, there's a lot of work to do to determine whether, and how, the process might actually work in the field. One thing the team will need to figure out is how to keep reactions going, because in the reaction that forms ammonia, the surface of the iron-rich rocks will be oxidized, leaving them in a state where they can't keep reacting. But Abate says the team is working on controlling how thick the unusable layer of rock is, and its composition, so the chemical reactions can continue. To commercialize this work, Abate is cofounding a company called Addis Energy with $4.25 million in pre-seed funds from investors including Engine Ventures. 
His cofounders include Michael Alexander and Charlie Mitchell (who have both spent time in the oil and gas industry) and Yet-Ming Chiang, an MIT professor and serial entrepreneur. The company will work on scaling up the research, including finding potential sites with the geological conditions to produce ammonia underground. The good news for scale-up efforts is that much of the necessary technology already exists in oil and gas operations, says Alexander, Addis's CEO. A field-deployed system will involve drilling, pumping fluid down into the ground, and extracting other fluids from beneath the surface, all very common operations in that industry. "There's novel chemistry that's wrapped in an oil and gas package," he says. The team will also work on refining cost estimates for the process and gaining a better understanding of safety and sustainability, Abate says. Ammonia is a toxic industrial chemical, but it's common enough for there to be established procedures for handling, storing, and transporting it, says RMI's Molloy. Judging from the researchers' early estimates, ammonia produced with this method could cost up to $0.55 per kilogram. That's more than ammonia produced with fossil fuels today ($0.40/kg), but the technique would likely be less expensive than other low-emissions methods of producing the chemical. Tweaks to the process, including using nitrogen from the air instead of nitrates, could help cut costs further, even as low as $0.20/kg. New approaches to making ammonia could be crucial for climate efforts. "It's a chemical that's essential to our way of life," says Karthish Manthiram, a professor at Caltech who studies electrochemistry, including alternative ammonia production methods. The team's research appears to be designed with scalability in mind from the outset, and using Earth itself as a reactor is the kind of thinking needed to accelerate the long-term journey to sustainable chemical production, Manthiram adds. 
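To put those per-kilogram figures side by side, here is a rough back-of-envelope sketch. It uses only the early estimates quoted above (these are the researchers' preliminary numbers, not measured market prices, and the labels are ours):

```python
# Compare the ammonia cost estimates quoted in the article.
# All figures are early, approximate estimates in USD per kilogram.
costs_usd_per_kg = {
    "fossil-fuel ammonia (today)": 0.40,
    "geologic ammonia (upper estimate)": 0.55,
    "geologic ammonia (with process tweaks)": 0.20,
}

baseline = costs_usd_per_kg["fossil-fuel ammonia (today)"]
for method, cost in costs_usd_per_kg.items():
    # Percentage difference relative to the fossil-fuel baseline
    delta_pct = (cost - baseline) / baseline * 100
    print(f"{method}: ${cost:.2f}/kg ({delta_pct:+.1f}% vs. fossil baseline)")
```

On these numbers, the upper geologic estimate carries a roughly 38% premium over fossil-based ammonia, while the tweaked process would undercut it by about half.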
While the company focuses on scale-up efforts, there's plenty of fundamental work left for Abate and other labs to do to understand what's going on during the reactions at the atomic level, particularly at the interface between the rocks and the reacting fluid. "Research in the lab is exciting, but it's only the first step," Abate says. "The next one is seeing if this actually works in the field."
  • There can be no winners in a US-China AI arms race
    www.technologyreview.com
The United States and China are entangled in what many have dubbed an "AI arms race." In the early days of this standoff, US policymakers drove an agenda centered on winning the race, mostly from an economic perspective. In recent months, leading AI labs such as OpenAI and Anthropic got involved in pushing the narrative of "beating China" in what appeared to be an attempt to align themselves with the incoming Trump administration. The belief that the US can win in such a race was based mostly on the early advantage it had over China in advanced GPU compute resources and the effectiveness of AI's scaling laws. But now it appears that access to large quantities of advanced compute resources is no longer the defining or sustainable advantage many had thought it would be. In fact, the capability gap between leading US and Chinese models has essentially disappeared, and in one important way the Chinese models may now have an advantage: They are able to achieve near equivalent results while using only a small fraction of the compute resources available to the leading Western labs. The AI competition is increasingly being framed within narrow national security terms, as a zero-sum game, and influenced by assumptions that a future war between the US and China, centered on Taiwan, is inevitable. The US has employed chokepoint tactics to limit China's access to key technologies like advanced semiconductors, and China has responded by accelerating its efforts toward self-sufficiency and indigenous innovation, which is causing US efforts to backfire. 
Recently even outgoing US Secretary of Commerce Gina Raimondo, a staunch advocate for strict export controls, finally admitted that using such controls to hold back China's progress on AI and advanced semiconductors is a "fool's errand." Ironically, the unprecedented export control packages targeting China's semiconductor and AI sectors have unfolded alongside tentative bilateral and multilateral engagements to establish AI safety standards and governance frameworks, highlighting a paradoxical desire of both sides to compete and cooperate. When we consider this dynamic more deeply, it becomes clear that the real existential threat ahead is not from China, but from the weaponization of advanced AI by bad actors and rogue groups who seek to create broad harms, gain wealth, or destabilize society. As with nuclear arms, China, as a nation-state, must be careful about using AI-powered capabilities against US interests, but bad actors, including extremist organizations, would be much more likely to abuse AI capabilities with little hesitation. Given the asymmetric nature of AI technology, which is much like cyberweapons, it is very difficult to fully prevent and defend against a determined foe who has mastered its use and intends to deploy it for nefarious ends. Given the ramifications, it is incumbent on the US and China as global leaders in developing AI technology to jointly identify and mitigate such threats, collaborate on solutions, and cooperate on developing a global framework for regulating the most advanced models, instead of erecting new fences, small or large, around AI technologies and pursuing policies that deflect focus from the real threat. It is now clearer than ever that despite the high stakes and escalating rhetoric, there will not and cannot be any long-term winners if the intense competition continues on its current path. 
Instead, the consequences could be severe: undermining global stability, stalling scientific progress, and leading both nations toward a dangerous technological brinkmanship. This is particularly salient given the importance of Taiwan and the global foundry leader TSMC in the AI stack, and the increasing tensions around the high-tech island. Heading blindly down this path will bring the risk of isolation and polarization, threatening not only international peace but also the vast potential benefits AI promises for humanity as a whole. Historical narratives, geopolitical forces, and economic competition have all contributed to the current state of the US-China AI rivalry. A recent report from the US-China Economic and Security Review Commission, for example, frames the entire issue in binary terms, focused on dominance or subservience. This "winner takes all" logic overlooks the potential for global collaboration and could even provoke a self-fulfilling prophecy by escalating conflict. Under the new Trump administration this dynamic will likely become more accentuated, with increasing discussion of a "Manhattan Project for AI" and redirection of US military resources from Ukraine toward China. Fortunately, a glimmer of hope for a responsible approach to AI collaboration is appearing now, as Donald Trump posted on January 17 that he'd restarted direct dialogue with Chairman Xi Jinping regarding various areas of collaboration, and that given past cooperation the two countries "should continue to be partners and friends." The outcome of the TikTok drama, which puts Trump at odds with sharp China critics in his own administration and Congress, will be a preview of how his efforts to put US-China relations on a less confrontational trajectory will fare.

The promise of AI for good

Western mass media usually focuses on attention-grabbing issues described in terms like the "existential risks of evil AI." Unfortunately, the AI safety experts who get the most coverage often recite the same narratives, scaring the public. 
In reality, no credible research shows that more capable AI will become increasingly evil. We need to challenge the current false dichotomy of pure accelerationism versus doomerism to allow for a model more like collaborative acceleration. It is important to note the significant difference between the way AI is perceived in Western developed countries and developing countries. In developed countries the public sentiment toward AI is 60% to 70% negative, while in the developing markets the positive ratings are 60% to 80%. People in the latter places have seen technology transform their lives for the better in the past decades and are hopeful AI will help solve the remaining issues they face by improving education, health care, and productivity, thereby elevating their quality of life and giving them greater world standing. What Western populations often fail to realize is that those same benefits could directly improve their lives as well, given the high levels of inequity even in developed markets. Consider what progress would be possible if we reallocated the trillions that go into defense budgets each year to infrastructure, education, and health-care projects. Once we get to the next phase, AI will help us accelerate scientific discovery, develop new drugs, extend our health span, reduce our work obligations, and ensure access to high-quality education for all. This may sound idealistic, but given current trends, most of this can become a reality within a generation, and maybe sooner. To get there we'll need more advanced AI systems, which will be a much more challenging goal if we divide up compute/data resources and research talent pools. Almost half of all top AI researchers globally (47%) were born or educated in China, according to industry studies. It's hard to imagine how we could have gotten where we are without the efforts of Chinese researchers. 
Active collaboration with China on joint AI research could be pivotal to supercharging progress with a major infusion of quality training data and researchers. The escalating AI competition between the US and China poses significant threats to both nations and to the entire world. The risks inherent in this rivalry are not hypothetical; they could lead to outcomes that threaten global peace, economic stability, and technological progress. Framing the development of artificial intelligence as a zero-sum race undermines opportunities for collective advancement and security. Rather than succumb to the rhetoric of confrontation, it is imperative that the US and China, along with their allies, shift toward collaboration and shared governance.

Our recommendations for policymakers:

1. Reduce national security dominance over AI policy. Both the US and China must recalibrate their approach to AI development, moving away from viewing AI primarily as a military asset. This means reducing the emphasis on national security concerns that currently dominate every aspect of AI policy. Instead, policymakers should focus on civilian applications of AI that can directly benefit their populations and address global challenges, such as health care, education, and climate change. The US also needs to investigate how to implement a possible universal basic income program as job displacement from AI adoption becomes a bigger issue domestically.

2. Promote bilateral and multilateral AI governance. Establishing a robust dialogue between the US, China, and other international stakeholders is crucial for the development of common AI governance standards. This includes agreeing on ethical norms, safety measures, and transparency guidelines for advanced AI technologies. A cooperative framework would help ensure that AI development is conducted responsibly and inclusively, minimizing risks while maximizing benefits for all. 
3. Expand investment in detection and mitigation of AI misuse. The risk of AI misuse by bad actors, whether through misinformation campaigns; attacks on telecom, power, or financial systems; or cybersecurity attacks with the potential to destabilize society, is the biggest existential threat to the world today. Dramatically increasing funding for and international cooperation in detecting and mitigating these risks is vital. The US and China must agree on shared standards for the responsible use of AI and collaborate on tools that can monitor and counteract misuse globally.

4. Create incentives for collaborative AI research. Governments should provide incentives for academic and industry collaborations across borders. By creating joint funding programs and research initiatives, the US and China can foster an environment where the best minds from both nations contribute to breakthroughs in AI that serve humanity as a whole. This collaboration would help pool talent, data, and compute resources, overcoming barriers that neither country could tackle alone. A global effort akin to the "CERN for AI" will bring much more value to the world, and a peaceful end, than a "Manhattan Project for AI," which is being promoted by many in Washington today.

5. Establish trust-building measures. Both countries need to prevent misinterpretations of AI-related actions as aggressive or threatening. They could do this via data-sharing agreements, joint projects in nonmilitary AI, and exchanges between AI researchers. Reducing import restrictions for civilian AI use cases, for example, could help the nations rebuild some trust and make it possible for them to discuss deeper cooperation on joint research. These measures would help build transparency, reduce the risk of miscommunication, and pave the way for a less adversarial relationship. 
6. Support the development of a global AI safety coalition. A coalition that includes major AI developers from multiple countries could serve as a neutral platform for addressing ethical and safety concerns. This coalition would bring together leading AI researchers, ethicists, and policymakers to ensure that AI progresses in a way that is safe, fair, and beneficial to all. This effort should not exclude China, as it remains an essential partner in developing and maintaining a safe AI ecosystem.

7. Shift the focus toward AI for global challenges. It is crucial that the world's two AI superpowers use their capabilities to tackle global issues, such as climate change, disease, and poverty. By demonstrating the positive societal impacts of AI through tangible projects and presenting it not as a threat but as a powerful tool for good, the US and China can reshape public perception of AI.

Our choice is stark but simple: We can proceed down a path of confrontation that will almost certainly lead to mutual harm, or we can pivot toward collaboration, which offers the potential for a prosperous and stable future for all. Artificial intelligence holds the promise to solve some of the greatest challenges facing humanity, but realizing this potential depends on whether we choose to race against each other or work together. The opportunity to harness AI for the common good is a chance the world cannot afford to miss.

Alvin Wang Graylin is a technology executive, author, investor, and pioneer with over 30 years of experience shaping innovation in AI, XR (extended reality), cybersecurity, and semiconductors. Currently serving as global vice president at HTC, Graylin was the company's China president from 2016 to 2023. He is the author of Our Next Reality.

Paul Triolo is a partner for China and technology policy lead at DGA-Albright Stonebridge Group. 
He advises clients in technology, financial services, and other sectors as they navigate complex political and regulatory matters in the US, China, the European Union, India, and around the world.
  • The Download: AI for cancer diagnosis, and HIV prevention
    www.technologyreview.com
This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

Why it's so hard to use AI to diagnose cancer

Finding and diagnosing cancer is all about spotting patterns. Radiologists use x-rays and magnetic resonance imaging to illuminate tumors, and pathologists examine tissue from kidneys, livers, and other areas under microscopes. They look for patterns that show how severe a cancer is, whether particular treatments could work, and where the malignancy may spread. Visual analysis is something that AI has gotten quite good at since the first image recognition models began taking off nearly 15 years ago. Even though no model will be perfect, you can imagine a powerful algorithm someday catching something that a human pathologist missed, or at least speeding up the process of getting a diagnosis. We're starting to see lots of new efforts to build such a model, at least seven attempts in the last year alone. But they all remain experimental. What will it take to make them good enough to be used in the real world? Read the full story.

James O'Donnell

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Long-acting HIV prevention meds: 10 Breakthrough Technologies 2025

In June 2024, results from a trial of a new medicine to prevent HIV were announced, and they were jaw-dropping. Lenacapavir, a treatment injected once every six months, protected over 5,000 girls and women in Uganda and South Africa from getting HIV. And it was 100% effective. So far, the FDA has approved the drug only for people who already have HIV that's resistant to other treatments. But its producer Gilead has signed licensing agreements with manufacturers to produce generic versions for HIV prevention in 120 low-income countries. The United Nations has set a goal of ending AIDS by 2030. 
It's ambitious, to say the least: We still see over 1 million new HIV infections globally every year. But we now have the medicines to get us there. What we need is access. Read the full story. Jessica Hamzelou Long-acting HIV prevention meds is one of our 10 Breakthrough Technologies for 2025, MIT Technology Review's annual list of tech to watch. Check out the rest of the list, and cast your vote for the honorary 11th breakthrough. The must-reads I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology. 1 Donald Trump signed an executive order delaying TikTok's ban Parent company ByteDance has 75 days to reach a deal to stay live in the US. (WP $) + China appears to be keen to keep the platform operating, too. (WSJ $) 2 Neo-Nazis are celebrating Elon Musk's salutes They're thrilled by the two Nazi-like salutes he gave at a post-inauguration rally. (Wired $) + Whether the gestures were intentional or not, extremists have chosen to interpret them that way. (Rolling Stone $) + MAGA is all about granting unchecked power to the already powerful. (Vox) + How tech billionaires are hoping Trump will reward them for their support. (NY Mag $) 3 Trump is withdrawing the US from the World Health Organization He's accused the agency of mishandling the covid-19 pandemic. (Ars Technica) + He first tried to leave the WHO in 2020, but failed to complete it before he left office. (Reuters) + Trump is also working on pulling the US out of the Paris climate agreement. (The Verge) 4 Meta will keep using fact checkers outside the US, for now It wants to see how its crowdsourced fact verification system works in America before rolling it out further. (Bloomberg $) 5 Startup Friend has delayed shipments of its AI necklace Customers are unlikely to receive their pre-orders before Q3. (TechCrunch) + Introducing: The AI Hype Index. 
(MIT Technology Review) 6 This sophisticated tool can pinpoint where a photo was taken in seconds Members of the public have been trying to use GeoSpy for nefarious means for months. (404 Media) 7 Los Angeles is covered in ash And it could take years before it fully disappears. (The Atlantic $) 8 Singapore is turning to AI companions to care for its elders Robots are filling the void left by an absence of human nurses. (Rest of World) + Inside Japan's long experiment in automating elder care. (MIT Technology Review) 9 The lost art of using a pen Typing and swiping are replacing good old-fashioned paper and ink. (The Guardian) 10 LinkedIn is getting humorous Posts are getting more personal, with a decidedly comedic bent. (FT $) Quote of the day It's been really beautiful to watch how two communities that would be considered polar opposites have come together. Khalil Bowens, a content creator based in Los Angeles, reflects on the influx of Americans joining Chinese social media app Xiaohongshu to the Wall Street Journal. The big story Inside the messy ethics of making war with machines August 2023 In recent years, intelligent autonomous weapons, weapons that can select and fire upon targets without any human input, have become a matter of serious concern. Giving an AI system the power to decide matters of life and death would radically change warfare forever. Intelligent autonomous weapons that fully displace human decision-making have (likely) yet to see real-world use. However, these systems have become sophisticated enough to raise novel questions, ones that are surprisingly tricky to answer. What does it mean when a decision is only part human and part machine? And when, if ever, is it ethical for that decision to be a decision to kill? Read the full story. Arthur Holland Michel We can still have nice things A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.) 
+ Baby octopuses aren't just cute: they can change color from the moment they're born. + Nintendo artist Takaya Imamura played a key role in making the company the gaming juggernaut it is today. + David Lynch wasn't just a master of imagery; the way he deployed music to creep us out was second to none. + Only got a bag of rice in the cupboard? No problem.
  • Why it's so hard to use AI to diagnose cancer
    www.technologyreview.com
    This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. Peering into the body to find and diagnose cancer is all about spotting patterns. Radiologists use x-rays and magnetic resonance imaging to illuminate tumors, and pathologists examine tissue from kidneys, livers, and other areas under microscopes and look for patterns that show how severe a cancer is, whether particular treatments could work, and where the malignancy may spread. In theory, artificial intelligence should be great at helping out. Our job is pattern recognition, says Andrew Norgan, a pathologist and medical director of the Mayo Clinic's digital pathology platform. We look at the slide and we gather pieces of information that have been proven to be important. Visual analysis is something that AI has gotten quite good at since the first image recognition models began taking off nearly 15 years ago. Even though no model will be perfect, you can imagine a powerful algorithm someday catching something that a human pathologist missed, or at least speeding up the process of getting a diagnosis. We're starting to see lots of new efforts to build such a model, at least seven attempts in the last year alone, but they all remain experimental. Details about the latest effort to build such a model, led by the AI health company Aignostics with the Mayo Clinic, were published on arXiv earlier this month. The paper has not been peer-reviewed, but it reveals much about the challenges of bringing such a tool to real clinical settings. The model, called Atlas, was trained on 1.2 million tissue samples from 490,000 cases. Its accuracy was tested against six other leading AI pathology models. These models compete on shared tests like classifying breast cancer images or grading tumors, where the models' predictions are compared with the correct answers given by human pathologists. Atlas beat rival models on six out of nine tests. 
It earned its highest score for categorizing cancerous colorectal tissue, reaching the same conclusion as human pathologists 97.1% of the time. For another task, though, classifying tumors from prostate cancer biopsies, Atlas beat the other models' high scores with a score of just 70.5%. Its average across nine benchmarks showed that it got the same answers as human experts 84.6% of the time. Let's think about what this means. The best way to know what's happening to cancerous cells in tissues is to have a sample examined by a pathologist, so that's the performance that AI models are measured against. The best models are approaching humans in particular detection tasks but lagging behind in many others. So how good does a model have to be to be clinically useful? Ninety percent is probably not good enough. You need to be even better, says Carlo Bifulco, chief medical officer at Providence Genomics and co-creator of GigaPath, one of the other AI pathology models examined in the Mayo Clinic study. But, Bifulco says, AI models that don't score perfectly can still be useful in the short term, and could potentially help pathologists speed up their work and make diagnoses more quickly. What obstacles are getting in the way of better performance? Problem number one is training data. Fewer than 10% of pathology practices in the US are digitized, Norgan says. That means tissue samples are placed on slides and analyzed under microscopes, and then stored in massive registries without ever being documented digitally. Though European practices tend to be more digitized, and there are efforts underway to create shared data sets of tissue samples for AI models to train on, there's still not a ton to work with. Without diverse data sets, AI models struggle to identify the wide range of abnormalities that human pathologists have learned to interpret. That includes rare diseases, says Maximilian Alber, cofounder and CTO of Aignostics. 
Scouring the publicly available databases for tissue samples of particularly rare diseases, you'll find 20 samples over 10 years, he says. Around 2022, the Mayo Clinic foresaw that this lack of training data would be a problem. It decided to digitize all of its own pathology practices moving forward, along with 12 million slides from its archives dating back decades (patients had consented to their being used for research). It hired a company to build a robot that began taking high-resolution photos of the tissues, working through up to a million samples per month. From these efforts, the team was able to collect the 1.2 million high-quality samples used to train the Mayo model. This brings us to problem number two for using AI to spot cancer. Tissue samples from biopsies are tiny, often just a couple of millimeters in diameter, but are magnified to such a degree that digital images of them contain more than 14 billion pixels. That makes them about 287,000 times larger than images used to train the best AI image recognition models to date. That obviously means lots of storage costs and so forth, says Hoifung Poon, an AI researcher at Microsoft who worked with Bifulco to create GigaPath, which was featured in Nature. Thirdly, there's the question of which benchmarks are most important for a cancer-spotting AI model to perform well on. The Atlas researchers tested their model in the challenging domain of molecular-related benchmarks, which involves trying to find clues from sample tissue images to guess what's happening on a molecular level. Here's an example: Your body's mismatch repair genes are of particular concern for cancer, because they catch errors made when your DNA gets replicated. If these errors aren't caught, they can drive the development and progression of cancer. Some pathologists might tell you they kind of get a feeling when they think something's mismatch-repair deficient based on how it looks, Norgan says. But pathologists don't act on that gut feeling alone. 
They can do molecular testing for a more definitive answer. What if instead, Norgan says, we can use AI to predict what's happening on the molecular level? It's an experiment: Could the AI model spot underlying molecular changes that humans can't see? Generally no, it turns out. Or at least not yet. Atlas's average for the molecular testing was 44.9%. That's the best performance for AI so far, but it shows this type of testing has a long way to go. Bifulco says Atlas represents incremental but real progress. My feeling, unfortunately, is that everybody's stuck at a similar level, he says. We need something different in terms of models to really make dramatic progress, and we need larger data sets. Now read the rest of The Algorithm Deeper Learning OpenAI has created an AI model for longevity science AI has long had its fingerprints on the science of protein folding. But OpenAI now says it's created a model that can engineer proteins, turning regular cells into stem cells. That goal has been pursued by companies in longevity science, because stem cells can produce any other tissue in the body and, in theory, could be a starting point for rejuvenating animals, building human organs, or providing supplies of replacement cells. Why it matters: The work was a product of OpenAI's collaboration with the longevity company Retro Labs, in which Sam Altman invested $180 million. It represents OpenAI's first model focused on biological data and its first public claim that its models can deliver scientific results. The AI model reportedly engineered more effective proteins, and more quickly, than the company's scientists could. But outside scientists can't evaluate the claims until the studies have been published. Read more from Antonio Regalado. Bits and Bytes What we know about the TikTok ban The popular video app went dark in the United States late Saturday and then came back around noon on Sunday, even as a law banning it took effect. 
(The New York Times) Why Meta might not end up like X X lost lots of advertising dollars as Elon Musk changed the platform's policies. But Facebook and Instagram's massive scale make them hard platforms for advertisers to avoid. (Wall Street Journal) What to expect from Neuralink in 2025 More volunteers will get Elon Musk's brain implant, but don't expect a product soon. (MIT Technology Review) A former fact-checking outlet for Meta signed a new deal to help train AI models Meta paid media outlets like Agence France-Presse for years to do fact checking on its platforms. Since Meta announced it would shutter those programs, Europe's leading AI company, Mistral, has signed a deal with AFP to use some of its content in its AI models. (Financial Times) OpenAI's AI reasoning model thinks in Chinese sometimes, and no one really knows why While working through its response, the model often switches to Chinese, perhaps a reflection of the fact that many data labelers are based in China. (TechCrunch)
  • The second wave of AI coding is here
    www.technologyreview.com
    Ask people building generative AI what generative AI is good for right now, what they're really fired up about, and many will tell you: coding. That's something that's been very exciting for developers, Jared Kaplan, chief scientist at Anthropic, told MIT Technology Review this month: It's really understanding what's wrong with code, debugging it. Copilot, a tool built on top of OpenAI's large language models and launched by Microsoft-backed GitHub in 2022, is now used by millions of developers around the world. Millions more turn to general-purpose chatbots like Anthropic's Claude, OpenAI's ChatGPT, and Google DeepMind's Gemini for everyday help. Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers, Alphabet CEO Sundar Pichai claimed on an earnings call in October: This helps our engineers do more and move faster. Expect other tech companies to catch up, if they haven't already. It's not just the big beasts rolling out AI coding tools. A bunch of new startups have entered this buzzy market too. Newcomers such as Zencoder, Merly, Cosine, Tessl (valued at $750 million within months of being set up), and Poolside (valued at $3 billion before it even released a product) are all jostling for their slice of the pie. It actually looks like developers are willing to pay for copilots, says Nathan Benaich, an analyst at investment firm Air Street Capital: And so code is one of the easiest ways to monetize AI. Such companies promise to take generative coding assistants to the next level. Instead of providing developers with a kind of supercharged autocomplete, like most existing tools, this next generation can prototype, test, and debug code for you. The upshot is that developers could essentially turn into managers, who may spend more time reviewing and correcting code written by a model than writing it from scratch themselves. But there's more. 
Many of the people building generative coding assistants think that they could be a fast track to artificial general intelligence (AGI), the hypothetical superhuman technology that a number of top firms claim to have in their sights. The first time we will see a massively economically valuable activity to have reached human-level capabilities will be in software development, says Eiso Kant, CEO and cofounder of Poolside. (OpenAI has already boasted that its latest o3 model beat the company's own chief scientist in a competitive coding challenge.) Welcome to the second wave of AI coding. Correct code Software engineers talk about two types of correctness. There's the sense in which a program's syntax (its grammar) is correct, meaning all the words, numbers, and mathematical operators are in the right place. This matters a lot more than grammatical correctness in natural language. Get one tiny thing wrong in thousands of lines of code and none of it will run. The first generation of coding assistants are now pretty good at producing code that's correct in this sense. Trained on billions of pieces of code, they have assimilated the surface-level structures of many types of programs. But there's also the sense in which a program's function is correct: Sure, it runs, but does it actually do what you wanted it to? It's that second level of correctness that the new wave of generative coding assistants are aiming for, and this is what will really change the way software is made. Large language models can write code that compiles, but they may not always write the program that you wanted, says Alistair Pullen, a cofounder of Cosine. To do that, you need to re-create the thought processes that a human coder would have gone through to get that end result. The problem is that the data most coding assistants have been trained on, the billions of pieces of code taken from online repositories, doesn't capture those thought processes. 
It represents a finished product, not what went into making it. There's a lot of code out there, says Kant. But that data doesn't represent software development. What Pullen, Kant, and others are finding is that to build a model that does a lot more than autocomplete, one that can come up with useful programs, test them, and fix bugs, you need to show it a lot more than just code. You need to show it how that code was put together. In short, companies like Cosine and Poolside are building models that don't just mimic what good code looks like, whether it works well or not, but mimic the process that produces such code in the first place. Get it right and the models will come up with far better code and far better bug fixes. Breadcrumbs But you first need a data set that captures that process, the steps that a human developer might take when writing code. Think of these steps as a breadcrumb trail that a machine could follow to produce a similar piece of code itself. Part of that is working out what materials to draw from: Which sections of the existing codebase are needed for a given programming task? Context is critical, says Zencoder founder Andrew Filev. The first generation of tools did a very poor job on the context; they would basically just look at your open tabs. But your repo [code repository] might have 5,000 files and they'd miss most of it. Zencoder has hired a bunch of search engine veterans to help it build a tool that can analyze large codebases and figure out what is and isn't relevant. This detailed context reduces hallucinations and improves the quality of code that large language models can produce, says Filev: We call it repo grokking. Cosine also thinks context is key. But it draws on that context to create a new kind of data set. The company has asked dozens of coders to record what they were doing as they worked through hundreds of different programming tasks. We asked them to write down everything, says Pullen: Why did you open that file? 
Why did you scroll halfway through? Why did you close it? They also asked coders to annotate finished pieces of code, marking up sections that would have required knowledge of other pieces of code or specific documentation to write. Cosine then takes all that information and generates a large synthetic data set that maps the typical steps coders take, and the sources of information they draw on, to finished pieces of code. They use this data set to train a model to figure out what breadcrumb trail it might need to follow to produce a particular program, and then how to follow it. Poolside, based in San Francisco, is also creating a synthetic data set that captures the process of coding, but it leans more on a technique called RLCE, reinforcement learning from code execution. (Cosine uses this too, but to a lesser degree.) RLCE is analogous to the technique used to make chatbots like ChatGPT slick conversationalists, known as RLHF, reinforcement learning from human feedback. With RLHF, a model is trained to produce text that's more like the kind human testers say they favor. With RLCE, a model is trained to produce code that's more like the kind that does what it is supposed to do when it is run (or executed). Gaming the system Cosine and Poolside both say they are inspired by the approach DeepMind took with its game-playing model AlphaZero. AlphaZero was given the steps it could take, the moves in a game, and then left to play against itself over and over again, figuring out via trial and error which sequences of moves were winning and which were not. They let it explore moves at every possible turn, simulate as many games as you can throw compute at; that led all the way to beating Lee Sedol, says Pengming Wang, a founding scientist at Poolside, referring to the Korean Go grandmaster that AlphaZero beat in 2016. 
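The core of the RLCE idea described above, rewarding code for what it does when run, can be sketched in a few lines. This is an illustrative sketch only, not Poolside's or Cosine's actual pipeline; the function names, the `solution` convention, and the toy task are all invented for the example. A candidate program is executed against test cases and scored by the fraction it passes:

```python
# Minimal sketch of the reward signal in reinforcement-learning-from-code-execution:
# run a candidate program and score it by the fraction of test cases it passes.
# Illustrative only; names and structure are assumptions, not any vendor's system.

def execution_reward(candidate_src, tests, func_name="solution"):
    """Execute candidate code in a scratch namespace; return pass rate in [0, 1]."""
    namespace = {}
    try:
        exec(candidate_src, namespace)   # the "execution" step
        func = namespace[func_name]
    except Exception:
        return 0.0                       # code that doesn't even run earns no reward
    passed = 0
    for args, expected in tests:
        try:
            if func(*args) == expected:
                passed += 1
        except Exception:
            pass                         # a crash on one test scores zero on that test
    return passed / len(tests)

# Two candidates for the toy task "return the sum of a list":
good = "def solution(xs):\n    return sum(xs)"
buggy = "def solution(xs):\n    return max(xs)"  # syntactically fine, wrong behavior

tests = [(([1, 2, 3],), 6), (([5],), 5), (([2, 2],), 4)]
print(execution_reward(good, tests))   # 1.0
print(execution_reward(buggy, tests))  # 1/3: it runs, but passes only one test
```

In a real RLCE setup this pass rate would feed back into a reinforcement learning update on the model's weights; the sketch only shows how execution separates a working candidate from one that merely runs.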
Before Poolside, Wang worked at Google DeepMind on applications of AlphaZero beyond board games, including FunSearch, a version trained to solve advanced math problems. When that AlphaZero approach is applied to coding, the steps involved in producing a piece of code, the breadcrumbs, become the available moves in a game, and a correct program becomes winning that game. Left to play by itself, a model can improve far faster than a human could. A human coder tries and fails one failure at a time, says Kant. Models can try things 100 times at once. A key difference between Cosine and Poolside is that Cosine is using a custom version of GPT-4o provided by OpenAI, which makes it possible to train on a larger data set than the base model can cope with, but Poolside is building its own large language model from scratch. Poolside's Kant thinks that training a model on code from the start will give better results than adapting an existing model that has sucked up not only billions of pieces of code but most of the internet. I'm perfectly fine with our model forgetting about butterfly anatomy, he says. Cosine claims that its generative coding assistant, called Genie, tops the leaderboard on SWE-Bench, a standard set of tests for coding models. Poolside is still building its model but claims that what it has so far already matches the performance of GitHub's Copilot. I personally have a very strong belief that large language models will get us all the way to being as capable as a software developer, says Kant. Not everyone takes that view, however. Illogical LLMs To Justin Gottschlich, the CEO and founder of Merly, large language models are the wrong tool for the job, period. He invokes his dog: No amount of training for my dog will ever get him to be able to code; it just won't happen, he says. He can do all kinds of other things, but he's just incapable of that deep level of cognition. 
Having worked on code generation for more than a decade, Gottschlich has a similar sticking point with large language models. Programming requires the ability to work through logical puzzles with unwavering precision. No matter how well large language models may learn to mimic what human programmers do, at their core they are still essentially statistical slot machines, he says: I can't train an illogical system to become logical. Instead of training a large language model to generate code by feeding it lots of examples, Merly does not show its system human-written code at all. That's because to really build a model that can generate code, Gottschlich argues, you need to work at the level of the underlying logic that code represents, not the code itself. Merly's system is therefore trained on an intermediate representation, something like the machine-readable notation that most programming languages get translated into before they are run. Gottschlich won't say exactly what this looks like or how the process works. But he throws out an analogy: There's this idea in mathematics that the only numbers that have to exist are prime numbers, because you can calculate all other numbers using just the primes. Take that concept and apply it to code, he says. Not only does this approach get straight to the logic of programming; it's also fast, because millions of lines of code are reduced to a few thousand lines of intermediate language before the system analyzes them. Shifting mindsets What you think of these rival approaches may depend on what you want generative coding assistants to be. In November, Cosine banned its engineers from using tools other than its own products. It is now seeing the impact of Genie on its own engineers, who often find themselves watching the tool as it comes up with code for them. You now give the model the outcome you would like, and it goes ahead and worries about the implementation for you, says Yang Li, another Cosine cofounder. 
Pullen admits that it can be baffling, requiring a switch of mindset. We have engineers doing multiple tasks at once, flitting between windows, he says. While Genie is running code in one, they might be prompting it to do something else in another. These tools also make it possible to prototype multiple versions of a system at once. Say you're developing software that needs a payment system built in. You can get a coding assistant to simultaneously try out several different options, Stripe, Mango, Checkout, instead of having to code them by hand one at a time. Genie can be left to fix bugs around the clock. Most software teams use bug-reporting tools that let people upload descriptions of errors they have encountered. Genie can read these descriptions and come up with fixes. Then a human just needs to review them before updating the code base. No single human understands the trillions of lines of code in today's biggest software systems, says Li, and as more and more software gets written by other software, the amount of code will only get bigger. This will make coding assistants that maintain that code for us essential. The bottleneck will become how fast humans can review the machine-generated code, says Li. How do Cosine's engineers feel about all this? According to Pullen, at least, just fine. If I give you a hard problem, you're still going to think about how you want to describe that problem to the model, he says. Instead of writing the code, you have to write it in natural language. But there's still a lot of thinking that goes into that, so you're not really taking the joy of engineering away. The itch is still scratched. Some may adapt faster than others. Cosine likes to invite potential hires to spend a few days coding with its team. A couple of months ago it asked one such candidate to build a widget that would let employees share cool bits of software they were working on to social media. 
The task wasn't straightforward, requiring working knowledge of multiple sections of Cosine's millions of lines of code. But the candidate got it done in a matter of hours. This person who had never seen our code base turned up on Monday and by Tuesday afternoon he'd shipped something, says Li. We thought it would take him all week. (They hired him.) But there's another angle too. Many companies will use this technology to cut down on the number of programmers they hire. Li thinks we will soon see tiers of software engineers. At one end there will be elite developers with million-dollar salaries who can diagnose problems when the AI goes wrong. At the other end, smaller teams of 10 to 20 people will do a job that once required hundreds of coders. It will be like how ATMs transformed banking, says Li. Anything you want to do will be determined by compute and not head count, he says. I think it's generally accepted that the era of adding another few thousand engineers to your organization is over. Warp drives Indeed, for Gottschlich, machines that can code better than humans are going to be essential. For him, that's the only way we will build the vast, complex software systems that he thinks we will eventually need. Like many in Silicon Valley, he anticipates a future in which humans move to other planets. That's only going to be possible if we get AI to build the software required, he says: Merly's real goal is to get us to Mars. Gottschlich prefers to talk about machine programming rather than coding assistants, because he thinks that term frames the problem the wrong way. I don't think that these systems should be assisting humans; I think humans should be assisting them, he says. They can move at the speed of AI. Why restrict their potential? There's this cartoon called The Flintstones where they have these cars, but they only move when the drivers use their feet, says Gottschlich. This is sort of how I feel most people are doing AI for software systems. 
But what Merly's building is, essentially, spaceships, he adds. He's not joking. And I don't think spaceships should be powered by humans on a bicycle. Spaceships should be powered by a warp engine. If that sounds wild, it is. But there's a serious point to be made about what the people building this technology think the end goal really is. Gottschlich is not an outlier with his galaxy-brained take. Despite their focus on products that developers will want to use today, most of these companies have their sights on a far bigger payoff. Visit Cosine's website and the company introduces itself as a Human Reasoning Lab. It sees coding as just the first step toward a more general-purpose model that can mimic human problem-solving in a number of domains. Poolside has similar goals: The company states upfront that it is building AGI. Code is a way of formalizing reasoning, says Kant. Wang invokes agents. Imagine a system that can spin up its own software to do any task on the fly, he says. If you get to a point where your agent can really solve any computational task that you want through the means of software, that is a display of AGI, essentially. Down here on Earth, such systems may remain a pipe dream. And yet software engineering is changing faster than many at the cutting edge expected. We're not at a point where everything's just done by machines, but we're definitely stepping away from the usual role of a software engineer, says Cosine's Pullen. We're seeing the sparks of that new workflow, what it means to be a software engineer going into the future.
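The two types of correctness discussed in the Correct code section earlier, syntactic versus functional, fit in a toy example. The sketch below is illustrative only; the task ("average of a list") and function names are invented, not anything from the companies in this story:

```python
# Both functions below are syntactically correct: they parse and run without error.
# Only one is functionally correct for the intended task, "average of a list".
# Illustrative toy example; names and task are invented.

def average_right(xs):
    return sum(xs) / len(xs)    # does what was asked

def average_wrong(xs):
    return sum(xs) // len(xs)   # runs fine, but silently floors the result

data = [1, 2, 2]
print(average_right(data))  # 1.6666666666666667
print(average_wrong(data))  # 1 -- valid code, wrong program
```

A first-generation assistant that only learned surface structure could emit either version; only execution against the intended behavior tells them apart.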
  • The Download: AI's coding promises, and OpenAI's longevity push
    www.technologyreview.com
    This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology. The second wave of AI coding is here Ask people building generative AI what generative AI is good for right now, what they're really fired up about, and many will tell you: coding. Everyone from established AI giants to buzzy startups is promising to take coding assistants to the next level. Instead of providing developers with a kind of supercharged autocomplete, this next generation can prototype, test, and debug code for you. The upshot is that developers could essentially turn into managers, who may spend more time reviewing and correcting code written by a model than writing it from scratch themselves. But there's more. Many of the people building generative coding assistants think that they could be a fast track to artificial general intelligence, the hypothetical superhuman technology that a number of top firms claim to have in their sights. Read the full story. Will Douglas Heaven OpenAI has created an AI model for longevity science When you think of AI's contributions to science, you probably think of AlphaFold, the Google DeepMind protein-folding program that earned its creator a Nobel Prize last year. Now OpenAI says it's getting into the science game too, with a model for engineering proteins. The company says it has developed a language model that dreams up proteins capable of turning regular cells into stem cells, and that it has handily beat humans at the task. The work represents OpenAI's first model focused on biological data and its first public claim that its models can deliver unexpected scientific results. But until outside scientists get their hands on it, we can't say just how impressive it really is. Read the full story. Antonio Regalado Cleaner jet fuel: 10 Breakthrough Technologies 2025 New fuels made from used cooking oil, industrial waste, or even gases in the air could help power planes without fossil fuels. 
Depending on the source, they can reduce emissions by half or nearly eliminate them. And they can generally be used in existing planes, which could enable quick climate progress. These alternative jet fuels have been in development for years, but now they're becoming a big business, with factories springing up to produce them and new government mandates requiring their use. So while only about 0.5% of the roughly 100 billion gallons of jet fuel consumed by planes last year was something other than fossil fuel, that could soon change. Read the full story. Casey Crownhart

Cleaner jet fuel is one of our 10 Breakthrough Technologies for 2025, MIT Technology Review's annual list of tech to watch. Check out the rest of the list, and cast your vote for the honorary 11th breakthrough.

The must-reads
I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

1 TikTok is back online in the US
The company thanked Donald Trump for vowing to fight the federal ban it's facing. (The Verge)
+ The app went dark for users in America for around 14 hours. (WP $)
+ AI search startup Perplexity has suggested merging with TikTok. (CNBC)
+ Here's how people actually make money on TikTok. (WSJ $)

2 Trump's staff has an Elon Musk problem
Aides are annoyed by his constant contributions to matters he has little knowledge of. (WSJ $)
+ A power struggle between the two men is inevitable. (Slate $)
+ The great and the good of crypto attended a VIP Trump party on Friday. (NY Mag $)

3 AI is speeding up the Pentagon's kill list
Although the US military can't use the tech to directly kill humans, AI is making it faster and easier to plan how to do just that. (TechCrunch)
+ OpenAI's new defense contract completes its military pivot. (MIT Technology Review)

4 The majority of Americans haven't had their latest covid booster
Though they could help to protect you, and others. (Undark)
+ It's five years today since the US registered its first covid case.
(USA Today)

5 Europol is cracking down on encryption
The agency plans to pressure Big Tech to give police access to encrypted messages. (FT $)

6 This Swiss startup has created a powerful robotic worm
Borobotics wants to deploy the bots to dig for geothermal heat in our gardens. (The Next Web)

7 Thousands of lithium batteries were destroyed in a massive fire
The world's largest battery storage plant went up in flames in California. (New Scientist $)
+ Three takeaways about the current state of batteries. (MIT Technology Review)

8 Amazon's delivery drones struggle in the rain (Bloomberg $)

9 A Ring doorbell captured a meteorite crashing to Earth
It's the first known example of a meteorite fall documented by a doorbell cam. (CBS News)

10 AI is coming for your wardrobe
A wave of new apps will suggest what to wear and what to pair it with. (The Guardian)

Quote of the day
"TikTok was 100x better than anything you've created."
An Instagram user snaps at Facebook founder Mark Zuckerberg in the wake of TikTok's temporary US blackout over the weekend.

The big story
Running Tide is facing scientist departures and growing concerns over seaweed sinking for carbon removal
June 2022

Running Tide, an aquaculture company based in Portland, Maine, hopes to set tens of thousands of tiny floating kelp farms adrift in the North Atlantic. The idea is that the fast-growing macroalgae will eventually sink to the ocean floor, storing away thousands of tons of carbon dioxide in the process. The company has raised millions in venture funding and gained widespread media attention. But it struggled to grow kelp along rope lines in the open ocean during initial attempts last year and has lost a string of scientists in recent months, sources with knowledge of the matter tell MIT Technology Review. What happens next? Read the full story. James Temple

We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)
+ Why not cheer up your Monday with the kings of merriment, The Smiths?
+ This is fascinating: how fish detect color and why it's so different to us humans.
+ The people of Finland know a thing or two about happiness.
+ It's time to get planning a spring getaway, and these destinations look just fabulous.
  • Deciding the fate of leftover embryos
    www.technologyreview.com
This article first appeared in The Checkup, MIT Technology Review's weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Over the past few months, I've been working on a piece about IVF embryos. The goal of in vitro fertilization is to create babies via a bit of lab work: Trigger the release of lots of eggs, introduce them to sperm in a lab, transfer one of the resulting embryos into a person's uterus, and cross your fingers for a healthy pregnancy. Sometimes it doesn't work. But often it does. For the article, I explored what happens to the healthy embryos that are left over.

I spoke to Lisa Holligan, who had IVF in the UK around five years ago. Holligan donated her genetically abnormal embryos for scientific research. But she still has one healthy embryo frozen in storage. And she doesn't know what to do with it. She's not the only one struggling with the decision.

Leftover embryos are kept frozen in storage tanks, where they sit in little straws, invisible to the naked eye, their growth paused in a state of suspended animation. What happens next is down to personal choice, but that choice can be limited by a complex web of laws and ethical and social factors.

These days, responsible IVF clinics will always talk to people about the possibility of having leftover embryos before they begin treatment. Intended parents will sign a form indicating what they would like to happen to those embryos. Typically, that means deciding early on whether they might like any embryos they don't end up using to be destroyed or donated, either to someone else trying to conceive or for research. But it can be really difficult to make these decisions before you've even started treatment. People seeking fertility treatment will usually have spent a long time trying to get pregnant. They are hoping for healthy embryos, and some can't imagine having any left over, or how they might feel about them.
For a lot of people, embryos are not just balls of cells. They hold the potential for life, after all. Some people see them as children, waiting to be born. Some even name their embryos, or call them their "freezer babies." Others see them as the product of a long, exhausting, and expensive IVF journey.

Holligan says that she initially considered donating her embryo to another person, but her husband disagreed. He saw the embryo as their child and said he wouldn't feel comfortable with giving it up to another family. "I started having these thoughts about a child coming to me when they're older, saying they've had a terrible life, and [asking] 'Why didn't you have me?'" she told me.

Holligan lives in the UK, where you can store your embryos for up to 55 years. Destroying or donating them are also options. That's not the case in other countries. In Italy, for example, embryos cannot be destroyed or donated. Any that are frozen will remain that way forever, unless the law changes at some point. In the US, regulations vary by state. The patchwork of laws means that one state can bestow a legal status on embryos, giving them the same rights as children, while another might have no legislation in place at all.

No one knows for sure how many embryos are frozen in storage tanks, but the figure is thought to be somewhere between 1 million and 10 million in the US alone. Some of these embryos have been in storage for years or decades. In some cases, the intended parents have deliberately chosen this, opting to pay hundreds of dollars per year in fees. But in other cases, clinics have lost touch with their clients. Many of these former clients have stopped paying for the storage of their embryos, but without up-to-date consent forms, clinics can be reluctant to destroy them. What if the person comes back and wants to use those embryos after all?
"Most clinics, if they have any hesitation or doubt or question, will err on the side of holding on to those embryos and not discarding them," says Sigal Klipstein, a reproductive endocrinologist at InVia Fertility Center in Chicago, who also chairs the ethics committee of the American Society for Reproductive Medicine. "Because it's kind of like a one-way ticket."

Klipstein thinks one of the reasons why some embryos end up abandoned in storage is that the people who created them can't bring themselves to destroy them. "It's just very emotionally difficult for someone who has wanted so much to have a family," she tells me.

Klipstein says she regularly talks to her patients about what to do with leftover embryos. Even people who make the decision with confidence can change their minds, she says. "We've all had those patients who have discarded embryos and then come back six months or a year later and said: 'Oh, I wish I had those embryos,'" she tells me. "Those [embryos may have been] their best chance of pregnancy."

Those who do want to discard their embryos have options. Often, the embryos will simply be exposed to air and then disposed of. But some clinics will also offer to transfer them at a time or place where a pregnancy is extremely unlikely to result. This "compassionate transfer," as it is known, might be viewed as a more natural way to dispose of the embryo. But it's not for everyone. Holligan has experienced multiple miscarriages and wonders if a compassionate transfer might feel similar. She wonders if it might just end up "putting [her] body and mind through unnecessary stress."

Ultimately, for Holligan and many others in a similar position, the choice remains a difficult one. "These are very desired embryos," says Klipstein. "The purpose of going through IVF was to create embryos to make babies. And [when people] have these embryos, and they've completed their family plan, they're in a place they couldn't have imagined."
Now read the rest of The Checkup

Read more from MIT Technology Review's archive

Our relationship with embryos is unique, and a bit all over the place. That's partly because we can't agree on their moral status. Are they more akin to people or property, or something in between? Who should get to decide their fate? While we get to the bottom of these sticky questions, millions of embryos are stuck in suspended animation, some of them indefinitely.

It is estimated that over 12 million babies have been born through IVF. The development of the Nobel Prize-winning technology behind the procedure relied on embryo research. Some worry that donating embryos for research can be onerous, and that valuable embryos are being wasted as a result.

Fertility rates around the world are dropping below the levels needed to maintain stable populations. But IVF can't save us from a looming fertility crisis. Gender equality and family-friendly policies are much more likely to prove helpful.

Two years ago, the US Supreme Court overturned Roe v. Wade, a legal decision that protected the right to abortion. Since then, abortion bans have been enacted in multiple states. But in November of last year, some states voted to extend and protect access to abortion, and voters in Missouri supported overturning the state's ban.

Last year, a ruling by the Alabama Supreme Court that embryos count as children ignited fears over access to fertility treatments in a state that had already banned abortion. The move could also have implications for the development of technologies like artificial uteruses and synthetic embryos, my colleague Antonio Regalado wrote at the time.

From around the web
It's not just embryos that are frozen as part of fertility treatments. Eggs, sperm, and even ovarian and testicular tissue can be stored too.
A man who had immature testicular tissue removed and frozen before undergoing chemotherapy as a child 16 years ago had the tissue reimplanted in a world first, according to the team at University Hospital Brussels that performed the procedure around a month ago. The tissue was placed into the man's testicle and scrotum, and scientists will wait a year before testing to see if he is successfully producing sperm. (UZ Brussel)

The Danish pharmaceutical company Novo Nordisk makes half the world's insulin. Now it is better known as the manufacturer of the semaglutide drug Ozempic. How will the sudden shift affect the production and distribution of these medicines around the world? (Wired)

The US has not done enough to prevent the spread of the H5N1 virus in dairy cattle. The response to bird flu is "a national embarrassment," argues Katherine J. Wu. (The Atlantic)

Elon Musk has said that if all goes well, millions of people will have brain-computer devices created by his company Neuralink implanted within 10 years. In reality, progress is slower; so far, Musk has said that three people have received the devices. My colleague Antonio Regalado predicts what we can expect from Neuralink in 2025. (MIT Technology Review)
  • OpenAI has created an AI model for longevity science
    www.technologyreview.com
When you think of AI's contributions to science, you probably think of AlphaFold, the Google DeepMind protein-folding program that earned its creator a Nobel Prize last year. Now OpenAI says it's getting into the science game too, with a model for engineering proteins.

The company says it has developed a language model that dreams up proteins capable of turning regular cells into stem cells, and that it has handily beat humans at the task. The work represents OpenAI's first model focused on biological data and its first public claim that its models can deliver unexpected scientific results. As such, it is a step toward determining whether or not AI can make true discoveries, which some argue is a major test on the pathway to artificial general intelligence.

Last week, OpenAI CEO Sam Altman said he was "confident" his company knows how to build an AGI, adding that "superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own."

The protein engineering project started a year ago when Retro Biosciences, a longevity research company based in San Francisco, approached OpenAI about working together. That link-up did not happen by chance. Sam Altman, the CEO of OpenAI, personally funded Retro with $180 million, as MIT Technology Review first reported in 2023.

Retro has the goal of extending the normal human lifespan by 10 years. For that, it studies what are called Yamanaka factors. Those are a set of proteins that, when added to a human skin cell, will cause it to morph into a young-seeming stem cell, a type that can produce any other tissue in the body. It's a phenomenon that researchers at Retro, and at richly funded companies like Altos Labs, see as the possible starting point for rejuvenating animals, building human organs, or providing supplies of replacement cells. But such cell reprogramming is not very efficient.
It takes several weeks, and less than 1% of cells treated in a lab dish will complete the rejuvenation journey.

OpenAI's new model, called GPT-4b micro, was trained to suggest ways to re-engineer the protein factors to increase their function. According to OpenAI, researchers used the model's suggestions to change two of the Yamanaka factors to be more than 50 times as effective, at least according to some preliminary measures.

"Just across the board, the proteins seem better than what the scientists were able to produce by themselves," says John Hallman, an OpenAI researcher. Hallman and OpenAI's Aaron Jaech, as well as Rico Meinl from Retro, were the model's lead developers.

Outside scientists won't be able to tell if the results are real until they're published, something the companies say they are planning. Nor is the model available for wider use; it's still a bespoke demonstration, not an official product launch.

"This project is meant to show that we're serious about contributing to science," says Jaech. "But whether those capabilities will come out to the world as a separate model or whether they'll be rolled into our mainline reasoning models, that's still to be determined."

The model does not work the same way as Google's AlphaFold, which predicts what shape proteins will take. Since the Yamanaka factors are unusually floppy and unstructured proteins, OpenAI said, they called for a different approach, which its large language models were suited to. The model was trained on examples of protein sequences from many species, as well as information on which proteins tend to interact with one another. While that's a lot of data, it's just a fraction of what OpenAI's flagship chatbots were trained on, making GPT-4b an example of a small language model that works with a focused data set.

Once Retro scientists were given the model, they tried to steer it to suggest possible redesigns of the Yamanaka proteins.
The prompting tactic used is similar to the few-shot method, in which a user queries a chatbot by providing a series of examples with answers, followed by an example for the bot to respond to.

Although genetic engineers have ways to direct the evolution of molecules in the lab, they can usually test only so many possibilities. And even a protein of typical length can be changed in nearly infinite ways (since they're built from hundreds of amino acids, and each acid comes in 20 possible varieties). OpenAI's model, however, often spits out suggestions in which a third of the amino acids in the proteins were changed.

"We threw this model into the lab immediately and we got real-world results," says Retro's CEO, Joe Betts-Lacroix. He says the model's ideas were unusually good, leading to improvements over the original Yamanaka factors in a substantial fraction of cases.

Vadim Gladyshev, a Harvard University aging researcher who consults with Retro, says better ways of making stem cells are needed. "For us, it would be extremely useful. [Skin cells] are easy to reprogram, but other cells are not," he says. "And to do it in a new species, it's often extremely different, and you don't get anything."

How exactly GPT-4b arrives at its guesses is still not clear, as is often the case with AI models. "It's like when AlphaGo crushed the best human at Go, but it took a long time to find out why," says Betts-Lacroix. "We are still figuring out what it does, and we think the way we apply this is only scratching the surface."

OpenAI says no money changed hands in the collaboration. But because the work could benefit Retro, whose biggest investor is Altman, the announcement may add to questions swirling around the OpenAI CEO's side projects. Last year, the Wall Street Journal said Altman's wide-ranging investments in private tech startups amount to "an opaque investment empire" that is creating a mounting list of potential conflicts, since some of these companies also do business with OpenAI.
In Retro's case, simply being associated with Altman, OpenAI, and the race toward AGI could boost its profile and increase its ability to hire staff and raise funds. Betts-Lacroix did not answer questions about whether the early-stage company is currently in fundraising mode. OpenAI says Altman was not directly involved in the work and that it never makes decisions based on Altman's other investments.
  • The Download: how to save social media, and leftover embryos
    www.technologyreview.com
This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

We need to protect the protocol that runs Bluesky
Eli Pariser & Deepti Doshi

Last week, when Mark Zuckerberg announced Meta would be ending third-party fact-checking, it was a shocking pivot, but not exactly surprising. It's just the latest example of a billionaire flip-flop affecting our social lives on the internet.

Zuckerberg isn't the only social media CEO careening all over the road: Elon Musk, since buying Twitter in 2022 and touting free speech as "the bedrock of a functioning democracy," has suspended journalists, restored tens of thousands of banned users, brought back political advertising, and weakened verification and harassment policies.

Unfortunately, these capricious billionaires can do whatever they want because of an ownership model that privileges singular, centralized control in exchange for shareholder returns. The internet doesn't need to be like this. But as luck would have it, a new way is emerging just in time. Read the full story.

Deciding the fate of leftover embryos

Over the past few months, I've been working on a piece about IVF embryos. The goal of in vitro fertilization is to create babies via a bit of lab work: Trigger the release of lots of eggs, introduce them to sperm in a lab, transfer one of the resulting embryos into a person's uterus, and cross your fingers for a healthy pregnancy. Sometimes it doesn't work. But often it does. For the article, I explored what happens to the healthy embryos that are left over.

These days, responsible IVF clinics will always talk to people about the possibility of having leftover embryos before they begin treatment. But it can be really difficult to make these decisions before you've even started treatment, and some people can't imagine having any left over, or how they might feel about them.
Read the full story. Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review's weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

MIT Technology Review Narrated: Palmer Luckey on the Pentagon's future of mixed reality

Palmer Luckey, the founder of Oculus VR, has set his sights on a new mixed-reality headset customer: the Pentagon. If designed well, his company Anduril's headset will automatically sort through countless pieces of information and flag the most important ones to soldiers in real time. But that's a big if.

This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we're publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it's released.

The must-reads
I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

1 The Biden administration won't force through a TikTok ban
But TikTok could choose to shut itself down on Sunday to prove a point. (ABC News)
+ A Supreme Court decision is expected later today. (NYT $)
+ Every platform has a touch of TikTok about it these days. (The Atlantic $)

2 Apple is pausing its AI news feature
Because it can't be trusted to meld news stories together without hallucinating. (BBC)
+ The company is working on a fix to roll out in a future software update. (WP $)

3 Meta is preparing for Donald Trump's mass deportations
By relaxing speech policies around immigration, Meta is poised to shape public opinion towards accepting Trump's plans to tear families apart. (404 Media)

4 An uncrewed SpaceX rocket exploded during a test flight
Elon Musk says it was probably caused by a leak. (WSJ $)

5 The FBI believes that hackers accessed its agents' call logs
The data could link investigators to their secret sources.
(Bloomberg $)

6 What it's like fighting fire with water
Dumping water on LA's wildfires may be inelegant, but it is effective. (NY Mag $)
+ How investigators are attempting to trace the fires' origins. (BBC)

7 The road to adapting Tesla's chargers for other EVs is far from smooth
But it is happening, slowly but surely. (IEEE Spectrum)
+ Donald Trump isn't a fan of EVs, but the market is undoubtedly growing. (Vox)
+ Why EV charging needs more than Tesla. (MIT Technology Review)

8 Bionic hands are getting far more sensitive (FT $)
+ These prosthetics break the mold with third thumbs, spikes, and superhero skins. (MIT Technology Review)

9 Gen Z can't get enough of astrology apps
Stargazing is firmly back in vogue among the younger generations. (Economist $)

10 Nintendo has finally unveiled its long-awaited Switch 2 console
Only for it to look a whole lot like its predecessor. (WSJ $)
+ But it'll probably sell a shedload of units anyway. (Wired $)

Quote of the day
"Going viral is like winning the lottery: nearly impossible to replicate."
Sarah Schauer, a former star on the defunct video app Vine, offers some advice to creators left nervous by TikTok's uncertain future in the US, the Washington Post reports.

The big story
After 25 years of hype, embryonic stem cells are still waiting for their moment
August 2023

In 1998, researchers isolated powerful stem cells from human embryos. It was a breakthrough, since these cells are the starting point for human bodies and have the capacity to turn into any other type of cell: heart cells, neurons, you name it.

National Geographic would later summarize the incredible promise: "the dream is to launch a medical revolution in which ailing organs and tissues might be repaired with living replacements." It was the dawn of a new era. A holy grail. Pick your favorite cliché; they all got airtime.

Yet today, more than two decades later, there are no treatments on the market based on these cells. Not one.
Our biotech editor Antonio Regalado set out to investigate why, and when that might change. Here's what he discovered.

We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)

+ If you're planning on catching up with a friend this weekend: stop! You should be hanging out instead.
+ David Lynch was a true visionary; an innovative artist and master of the truly weird. The world is a duller place without him.
+ The very best instant noodles, ranked. ($)
+ Congratulations to the highly exclusive Cambridge University Tiddlywinks Club, which is celebrating its 70th anniversary.
  • We need to protect the protocol that runs Bluesky
    www.technologyreview.com
Last week, when Mark Zuckerberg announced Meta would be ending third-party fact-checking, it was a shocking pivot, but not exactly surprising. It's just the latest example of a billionaire flip-flop affecting our social lives on the internet.

After January 6th, Zuckerberg bragged to Congress about Facebook's "industry-leading fact-checking program" and banned President Trump from the platform. But just two years later, he welcomed Trump back. And last year Zuckerberg was privately reassuring conservative rep Jim Jordan that Meta will no longer demote questionable content while it's being fact-checked. Now, not only is Meta ending fact-checking completely, it is loosening rules around hate speech, allowing horrendous personal attacks on migrants or trans people, for example, on its platforms.

And Zuckerberg isn't the only social media CEO careening all over the road: Elon Musk, since buying Twitter in 2022 and touting free speech as "the bedrock of a functioning democracy," has suspended journalists, restored tens of thousands of banned users (including white nationalists), brought back political advertising, and weakened verification and harassment policies.

Unfortunately, these capricious billionaires can do whatever they want because of an ownership model that privileges singular, centralized control in exchange for shareholder returns. And this has led to a constantly shifting, opaque digital environment in which people can lose their communication pathways and livelihoods in a second, with no recourse as the rules shift.

The internet doesn't need to be like this. But as luck would have it, a new way is emerging just in time. If you've heard of Bluesky, you've probably heard of it as a clone of Twitter where liberals can take refuge. But under the hood it's structured fundamentally differently, in a way that could point us to a healthier internet for everyone, regardless of politics or identity. Just like email, Bluesky sits on top of an open protocol.
In practice, that means that anyone can build on it. Just like you wouldn't need anyone's permission to start a newsletter company built on email, people are starting to share remixed versions of their social media feed, built on Bluesky. This sounds like a small thing, but think about all the harms done by social media companies through their algorithms in the last decade: insurrection, radicalization, self-harm, bullying. Similarly, Bluesky enables users to share blocklists and labels, to collaborate on verification and moderation. Letting people shape their own experience of social media is nothing short of revolutionary. And importantly, if you decide that you don't agree with Bluesky's design and moderation decisions, you can build something else on the same infrastructure and use that instead. This is fundamentally different from the dominant, centralized social media that has come before.

At the core of Bluesky's philosophy is the idea that instead of being centralized in the hands of one person or institution, social media governance should obey the principle of subsidiarity. Nobel Prize-winning economist Elinor Ostrom found, through studying grassroots solutions to local environmental problems around the world, that some problems are best solved locally, while others are best solved at a higher level. In terms of content moderation, posts related to CSAM or terrorism are best handled by professionals keeping millions or billions safe. But a lot of decisions about speech can be solved in each community, or even user by user, by assembling a Bluesky blocklist.

So all the right elements are currently in place at Bluesky to usher in this new architecture for social media: independent ownership, newfound popularity, a stark contrast with other dominant platforms, and right-minded leadership. But challenges remain, and we can't count on Bluesky doing this right without support.
Critics have pointed out that Bluesky has yet to turn a profit and is currently running on venture capital, the same corporate structure that brought us Facebook, Twitter, and other social media companies. As of now, there's no option to exit Bluesky and take your data and network with you, because there are no other servers that run the AT Protocol. Bluesky CEO Jay Graber deserves credit for her stewardship so far, and for attempting to avoid the dangers of advertising incentives. But the process of capitalism degrading tech products is so predictable that Cory Doctorow coined a now-popular term for it: "enshittification."

That's why we need to act now to secure the foundation of this digital future and make it enshittification-proof. That is the aim of Free Our Feeds. There are three parts: First, Free Our Feeds wants to create a nonprofit foundation to govern and protect the AT Protocol, outside of Bluesky the company. We also need to build redundant servers so anyone can leave with their data or build anything they want, regardless of policies set by Bluesky. Finally, we need to spur the development of a whole ecosystem built on this tech with seed money and expertise.

It's worth noting that this is not a hostile takeover: Bluesky and Graber recognize the importance of this effort and have signaled their approval. But the point is, this effort can't rely on them. To free us from fickle billionaires, some of the power has to reside outside Bluesky Inc.

If we get this right, so much is possible. Not too long ago, the internet was full of builders and people working together: the open web. Email. Podcasts. Wikipedia is one of the best examples: a collaborative project to create one of the web's best free, public resources. And the reason we still have it today is the infrastructure built up around it: the nonprofit Wikimedia Foundation protects the project and insulates it from the pressures of capitalism. When's the last time we collectively built anything as good?
We can shift the balance of power and reclaim our social lives from these companies and their billionaires. This is an opportunity to bring much more independence, innovation, and local control to our online conversations. We can finally build the Wikipedia of social media, or whatever we want. But we need to act, because the future of the internet can't depend on whether one of the richest men on earth wakes up on the wrong side of the bed. Eli Pariser is author of The Filter Bubble and co-director of New_ Public, a nonprofit R&D lab that's working to reimagine social media. Deepti Doshi is a co-director of New_ Public and was a director at Meta.
  • What to expect from Neuralink in 2025
    www.technologyreview.com
MIT Technology Review's What's Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here. In November, a young man named Noland Arbaugh announced he'd be livestreaming from his home for three days straight. His broadcast was in some ways typical fare: a backyard tour, video games, meet mom. The difference is that Arbaugh, who is paralyzed, has thin electrode-studded wires installed in his brain, which he used to move a computer mouse on a screen, click menus, and play chess. The implant, called N1, was installed last year by neurosurgeons working with Neuralink, Elon Musk's brain-interface company. The possibility of listening to neurons and using their signals to move a computer cursor was first demonstrated more than 20 years ago in a lab setting. Now, Arbaugh's livestream is an indicator that Neuralink is a whole lot closer to creating a plug-and-play experience that can restore people's daily ability to roam the web and play games, giving them what the company has called "digital freedom." But this is not yet a commercial product. The current studies are small-scale: they are true experiments, explorations of how the device works and how it can be improved. For instance, at some point last year, more than half the electrode-studded threads inserted into Arbaugh's brain retracted, and his control over the device worsened; Neuralink rushed to implement fixes so he could use his remaining electrodes to move the mouse. Neuralink did not reply to emails seeking comment, but here is what our analysis of its public statements leads us to expect from the company in 2025. More patients: How many people will get these implants? Musk posted on X: "If all goes well, there will be hundreds of people with Neuralinks within a few years, maybe tens of thousands within five years, millions within 10 years." In reality, the actual pace is slower, a lot slower.
That's because in a study of a novel device, it's typical for the first patients to be staged months apart, to allow time to monitor for problems. Neuralink has publicly announced that two people have received an implant: Arbaugh and a man referred to only as Alex, who received his in July or August. Then, on January 8, Musk disclosed during an online interview that there was now a third person with an implant. "We've got now three patients, three humans with Neuralinks implanted, and they are all working well," Musk said. During 2025, he added, "we expect to hopefully do, I don't know, 20 or 30 patients." Barring major setbacks, expect the pace of implants to increase, although perhaps not as fast as Musk says. In November, Neuralink updated its US trial listing to include space for five volunteers (up from three), and it also opened a trial in Canada with room for six. Considering these two studies only, Neuralink would carry out at least two more implants by the end of 2025 and eight by the end of 2026. However, by opening further international studies, Neuralink could increase the pace of the experiments. Better control: So how good is Arbaugh's control over the mouse? You can get an idea by trying a game called Webgrid, where you try to click quickly on a moving target. The program translates your speed into a measure of information transfer: bits per second. Neuralink claims Arbaugh reached a rate of over nine bits per second, doubling the old brain-interface record. The median able-bodied user scores around 10 bits per second, according to Neuralink. And yet during his livestream, Arbaugh complained that his mouse control wasn't very good because his model was out of date. It was a reference to how his imagined physical movements get mapped to mouse movements.
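For intuition, a bits-per-second figure like Webgrid's can be estimated with a simple Shannon-style formula: each selection among N targets conveys log2(N) bits, and wrong clicks are penalized. The sketch below is an illustrative approximation, not Neuralink's published metric; the grid size, click counts, and penalty rule are all assumptions.

```python
import math

def webgrid_bps(grid_targets: int, correct: int, incorrect: int, seconds: float) -> float:
    """Rough information-transfer estimate for a Webgrid-style clicking task.

    Each selection among `grid_targets` possibilities conveys log2(N) bits;
    wrong clicks are penalized by subtracting them from the correct count.
    """
    bits_per_selection = math.log2(grid_targets)
    net_selections = max(correct - incorrect, 0)
    return bits_per_selection * net_selections / seconds

# A hypothetical run: a 35x35 grid, 60 correct clicks, 2 misses, in one minute.
print(round(webgrid_bps(35 * 35, 60, 2, 60.0), 2))  # → 9.92
```

Under these made-up numbers the rate lands near Arbaugh's reported nine-plus bits per second, which shows why both speed and accuracy matter: more misses or slower clicks pull the rate down.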
That mapping degrades over hours and days, and to recalibrate it, he has said, he spends as long as 45 minutes doing a set of retraining tasks on his monitor, such as imagining moving a dot from a center point to the edge of a circle. [Noland Arbaugh stops to calibrate during a livestream on X. @MODDEDQUAD via X] Improving the software that sits between Arbaugh's brain and the mouse is a big area of focus for Neuralink, one where the company is still experimenting and making significant changes. Among the goals: cutting the recalibration time to a few minutes. "We want them to feel like they are in the F1 [Formula One] car, not the minivan," Bliss Chapman, who leads the BCI software team, told the podcaster Lex Fridman last year. Device changes: Before Neuralink ever seeks approval to sell its brain interface, it will have to lock in a final device design that can be tested in a pivotal trial involving perhaps 20 to 40 patients, to show it really works as intended. That type of study could itself take a year or two to carry out and hasn't yet been announced. In fact, Neuralink is still tweaking its implant in significant ways, for instance by trying to increase the number of electrodes or extend the battery life. This month, Musk said the next human tests would be using an upgraded Neuralink device. The company is also still developing the surgical robot, called R1, that's used to implant the device. It functions like a sewing machine: A surgeon uses R1 to thread the electrode wires into people's brains. According to Neuralink's job listings, improving the R1 robot and making the implant process entirely automatic is a major goal of the company. That's partly to meet Musk's predictions of a future where millions of people have an implant, since there wouldn't be enough neurosurgeons in the world to put them all in manually. "We want to get to the point where it's one click," Neuralink president Dongjin Seo told Fridman last year.
Robot arm: Late last year, Neuralink opened a companion study through which it says some of its existing implant volunteers will get to try using their brain activity to control not only a computer mouse but other types of external devices, including an assistive robotic arm. We haven't yet seen what Neuralink's robotic arm looks like, whether it's a tabletop research device or something that could be attached to a wheelchair and used at home to complete daily tasks. But it's clear such a device could be helpful. During Arbaugh's livestream he frequently asked other people to do simple things for him, like brush his hair or put on his hat. [Arbaugh demonstrates the use of Imagined Movement Control. @MODDEDQUAD via X] And using brains to control robots is definitely possible, although so far only in a controlled research setting. In tests using a different brain implant, carried out at the University of Pittsburgh in 2012, a paralyzed woman named Jan Scheuermann was able to use a robot arm to stack blocks and plastic cups about as well as a person who'd had a severe stroke: impressive, since she couldn't actually move her own limbs. There are several practical obstacles to using a robot arm at home. One is developing a robot that's safe and useful. Another, as noted by Wired, is that the calibration steps to maintain control over an arm that can make 3D movements and grasp objects could be onerous and time-consuming. Vision implant: In September, Neuralink said it had received "breakthrough device" designation from the FDA for a version of its implant that could be used to restore limited vision to blind people. The system, which it calls Blindsight, would work by sending electrical impulses directly into a volunteer's visual cortex, producing spots of light called phosphenes. If there are enough spots, they can be organized into a simple, pixelated form of vision, as previously demonstrated by academic researchers.
The FDA designation is not the same as permission to start the vision study. Instead, it's a promise by the agency to speed up review steps, including agreements around what a trial should look like. Right now, it's impossible to guess when a Neuralink vision trial could start, but it won't necessarily be this year. More money: Neuralink last raised money in 2023, collecting around $325 million from investors in a funding round that valued the company at over $3 billion, according to PitchBook. Ryan Tanaka, who publishes a podcast about the company, Neura Pod, says he thinks Neuralink will raise more money this year and that the valuation of the private company could triple. Fighting regulators: Neuralink has attracted plenty of scrutiny from news reporters, animal-rights campaigners, and even fraud investigators at the Securities and Exchange Commission. Many of the questions surround its treatment of test animals and whether it rushed to try the implant in people. More recently, Musk has started using his X platform to badger and bully heads of state and was named by Donald Trump to co-lead a so-called Department of Government Efficiency, which Musk says will get rid of nonsensical regulations and potentially gut some DC agencies. During 2025, watch for whether Musk uses his digital bullhorn to give health regulators pointed feedback on how they're handling Neuralink. Other efforts: Don't forget that Neuralink isn't the only company working on brain implants. A company called Synchron has one that's inserted into the brain through a blood vessel, which it's also testing in human trials of brain control over computers. Other companies, including Paradromics, Precision Neuroscience, and Blackrock Neurotech, are also developing advanced brain-computer interfaces. Special thanks to Ryan Tanaka of Neura Pod for pointing us to Neuralink's public announcements and projections.
• The Download: what's next for Neuralink, and Meta's language translation AI
    www.technologyreview.com
This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology. What to expect from Neuralink in 2025: In November, a young man named Noland Arbaugh announced he'd be livestreaming from his home for three days straight. His broadcast was in some ways typical fare: a backyard tour, video games, meet mom. The difference is that Arbaugh, who is paralyzed, has thin electrode-studded wires installed in his brain, which he used to move a computer mouse on a screen, click menus, and play chess. The implant, called N1, was installed last year by neurosurgeons working with Neuralink, Elon Musk's brain-interface company. Arbaugh's livestream is an indicator that Neuralink is a whole lot closer to creating a plug-and-play experience that can restore people's daily ability to roam the web and play games, giving them what the company has called "digital freedom." But this is not yet a commercial product. The current studies are small-scale: they are true experiments, explorations of how the device works and how it can be improved. Read on for our analysis of what to expect from the company in 2025. Antonio Regalado. Meta's new AI model can translate speech from more than 100 languages: What's new: Meta has released a new AI model that can translate speech from 101 different languages. It represents a step toward real-time, simultaneous interpretation, where words are translated as soon as they come out of someone's mouth. Why it matters: Typically, translation models for speech use a multistep approach, which can be inefficient, and at each step errors and mistranslations can creep in. But Meta's new model, called SeamlessM4T, enables more direct translation from speech in one language to speech in another. Read the full story. Scott J Mulligan. Interest in nuclear power is surging. Is it enough to build new reactors? Lately, the vibes have been good for nuclear power.
Public support is building, and public and private funding have made the technology more economical in key markets. There's also a swell of interest from major companies looking to power their data centers. These shifts have been great for existing nuclear plants. We're seeing efforts to boost their power output, extend the lifetime of old reactors, and even reopen facilities that have shut down. That's good news for climate action, because nuclear power plants produce consistent electricity with very low greenhouse-gas emissions. I covered all these trends in my latest story, which digs into what's next for nuclear power in 2025 and beyond. But as I spoke with experts, one central question kept coming up for me: Will all of this be enough to actually get new reactors built? (Casey Crownhart) This article is from The Spark, MIT Technology Review's weekly climate and energy newsletter. To receive it in your inbox every Wednesday, sign up here. The must-reads: I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.
1 Donald Trump is exploring how to save TikTok. An executive order could suspend its ban or sale by up to 90 days. (WP $) + But questions remain over the legality of such a move. (Axios) + YouTuber MrBeast has said he's interested in buying the app. (Insider $) + The depressing truth about TikTok's impending ban. (MIT Technology Review)
2 Blue Origin's New Glenn rocket has made it into space. But it lost a booster along the way. (The Verge)
3 Angelenos are naming and shaming landlords for illegal price gouging. A grassroots Google Sheet is tracking rentals with significant price increases amid the wildfires. (Fast Company $)
4 How the Trump administration will shake up defense tech. It's likely to favor newer players over established firms for lucrative contracts. (FT $) + Weapons startup Anduril plans to build a $1 billion factory in Ohio. (Axios) + Palmer Luckey on the Pentagon's future of mixed reality.
(MIT Technology Review)
5 The difference between mistakes made by humans and AI. Machines' errors are a whole lot weirder, for a start. (IEEE Spectrum) + A new public database lists all the ways AI could go wrong. (MIT Technology Review)
6 The creator economy is bouncing back. Funding for creator startups is rising, after two years in the doldrums. (The Information $)
7 Predicting the future of tech is notoriously tough. But asking better initial questions is a good place to start. (WSJ $)
8 IVF isn't just for combating fertility problems any more. It's becoming a tool for genetic screening before a baby is even born. (The Atlantic $) + Three-parent baby technique could create babies at risk of severe disease. (MIT Technology Review)
9 How killer caterpillars could pave the way to better medicine. Studying their toxic secretions could help create new drugs more quickly. (Knowable Magazine)
10 How to document your life digitally. If physical diaries aren't for you, there are plenty of smartphone-based options. (NYT $)
Quote of the day: "Americans may only be able to watch as their app rots." Joseph Lorenzo Hall, a technologist at the nonprofit Internet Society, tells Reuters how TikTok's complicated network of service providers means that the app could fall apart gradually, rather than all at once, if the proposed US ban goes ahead. The big story: How refrigeration ruined fresh food (October 2024). Three-quarters of everything in the average American diet passes through the cold chain: the network of warehouses, shipping containers, trucks, display cases, and domestic fridges that keep meat, milk, and more chilled on the journey from farm to fork. As consumers, we put a lot of faith in terms like "fresh" and "natural," but artificial refrigeration has created a blind spot. We've gotten so good at preserving (and storing) food that we know more about how to lengthen an apple's life span than a human's, and most of us don't give that extraordinary process much thought at all.
But all that convenience has come at the expense of diversity and deliciousness. Read the full story. Allison Arieff. We can still have nice things: A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.) + The biggest and best tours of 2025 look really exciting (especially Oasis!) + If you love classic mobile phones, you need to check out Aalto University's newly launched Nokia Design Archive immediately. + The one and only Ridley Scott explains how a cigarette inspired that iconic hand-in-wheat shot in Gladiator. + Set aside your reading goals for the year; your only aim should be to read the books you really want to.
  • Interest in nuclear power is surging. Is it enough to build new reactors?
    www.technologyreview.com
This article is from The Spark, MIT Technology Review's weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here. Lately, the vibes have been good for nuclear power. Public support is building, and public and private funding have made the technology more economical in key markets. There's also a swell of interest from major companies looking to power their data centers. These shifts have been great for existing nuclear plants. We're seeing efforts to boost their power output, extend the lifetime of old reactors, and even reopen facilities that have shut down. That's good news for climate action, because nuclear power plants produce consistent electricity with very low greenhouse-gas emissions. I covered all these trends in my latest story, which digs into what's next for nuclear power in 2025 and beyond. But as I spoke with experts, one central question kept coming up for me: Will all of this be enough to actually get new reactors built? To zoom in on some of these trends, let's take a look at the US, which has the largest fleet of nuclear reactors in the world (and the oldest, with an average age of over 42 years). In recent years we've seen a steady improvement in public support for nuclear power in the US. Today, around 56% of Americans support more nuclear power, up from 43% in 2020, according to a Pew Research poll. The economic landscape has also shifted in favor of the technology. The Inflation Reduction Act of 2022 includes tax credits specifically for operating nuclear plants, aimed at keeping them online. Qualifying plants can receive up to $15 per megawatt-hour, provided they meet certain labor requirements. (For context, in 2021, its last full year of operation, Palisades in Michigan generated over 7 million megawatt-hours.) Big Tech has also provided an economic boost for the industry: tech giants like Microsoft, Meta, Google, and Amazon are all making deals to get in on nuclear.
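To see what those two figures imply together, here is a back-of-envelope estimate of what the credit could be worth to a plant the size of Palisades, assuming (purely for illustration) that it earned the full $15 per megawatt-hour on output like its 2021 total:

```python
CREDIT_PER_MWH = 15               # maximum IRA production credit, in $/MWh (from the article)
PALISADES_ANNUAL_MWH = 7_000_000  # Palisades' 2021 output, per the article

annual_credit_dollars = CREDIT_PER_MWH * PALISADES_ANNUAL_MWH
print(f"${annual_credit_dollars / 1e6:.0f} million per year")  # → $105 million per year
```

Roughly $105 million a year for a single plant, which helps explain why once-marginal reactors are suddenly worth keeping online.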
These developments have made existing (or recently closed) nuclear power plants a hot commodity. Plants that might have been candidates for decommissioning just a few years ago are now candidates for license extension. Plants that have already shut down are seeing a potential second chance at life. There's also the potential to milk more power out of existing facilities through changes called uprates, which basically allow existing facilities to produce more energy by tweaking existing instruments and power generation systems. The US Nuclear Regulatory Commission has approved uprates totaling six gigawatts over the past two decades. That's a small but significant fraction of the roughly 97 gigawatts of nuclear capacity on the grid today. Any reactors kept online, reopened, or ramped up spell good news for emissions. We'll probably also need new reactors just to maintain the current fleet, since so many reactors are scheduled to be retired in the next couple of decades. Will the enthusiasm for keeping old plants running also translate into building new ones? In much of the world (China being a notable exception), building new nuclear capacity has historically been expensive and slow. It's easy to point at Plant Vogtle in the US: The third and fourth reactors at that facility began construction in 2009. They were originally scheduled to start up in 2016 and 2017, at a cost of around $14 billion. They actually came online in 2023 and 2024, and the total cost of the project was north of $30 billion. Some advanced technology promises to fix the problems in nuclear power. Small modular reactors could help cut costs and construction times, and next-generation reactors promise safety and efficiency improvements that could translate to cheaper, quicker construction. Realistically, though, getting these first-of-their-kind projects off the ground will still require a lot of money and a sustained commitment to making them happen.
"The next four years are make or break for advanced nuclear," says Jessica Lovering, cofounder at the Good Energy Collective, a policy research organization that advocates for the use of nuclear energy. There are a few factors that could help the progress we've seen recently in nuclear extend to new builds. For one, public support from the US Department of Energy includes not only tax credits but public loans and grants for demonstration projects, which can be a key stepping stone to commercial plants that generate electricity for the grid. Changes to the regulatory process could also help. The Advance Act, passed in 2024, aims at sprucing up the Nuclear Regulatory Commission (NRC) in the hopes of making the approval process more efficient (currently, it can take up to five years to complete). "If you can see the NRC really start to modernize toward a more efficient, effective, and predictable regulator, it really helps the case for a lot of these commercial projects, because the NRC will no longer be seen as this barrier to innovation," says Patrick White, research director at the Nuclear Innovation Alliance, a nonprofit think tank. We should start to see changes from that legislation this year, though what happens could depend on the Trump administration. The next few years are crucial for next-generation nuclear technology, and how the industry fares between now and the end of the decade could be very telling when it comes to how big a role this technology plays in our longer-term efforts to decarbonize energy. Now read the rest of The Spark. Related reading: For more on what's next for nuclear power, check out my latest story. One key trend I'm following is efforts to reopen shuttered nuclear plants. Here's how to do it. Kairos Power is working to build molten-salt-cooled reactors, and we named the company to our list of 10 Climate Tech Companies to Watch in 2024. Another thing: Devastating wildfires have been ravaging Southern California.
Here's a roundup of some key stories about the blazes.
Strong winds have continued this week, bringing with them the threat of new fires. Here's a page with live updates on the latest. (Washington Post)
Officials are scouring the spot where the deadly Palisades fire started to better understand how it was sparked. (New York Times)
Climate change didn't directly start the fires, but global warming did contribute to how intensely they burned and how quickly they spread. (Axios)
The LA fires show that controlled burns aren't a cure-all when it comes to preventing wildfires. (Heatmap News)
Seawater is a last resort when it comes to fighting fires, since it's corrosive and can harm the environment when dumped on a blaze. (Wall Street Journal)
Keeping up with climate:
US emissions cuts stalled last year, despite strong growth in renewables. The cause: After staying flat or falling for two decades, electricity demand is rising. (New York Times)
With Donald Trump set to take office in the US next week, many are looking to state governments as a potential seat of climate action. Here's what to look for in states including Texas, California, and Massachusetts. (Inside Climate News)
The US could see as many as 80 new gas-fired power plants built by 2030. The surge comes as demand for power from data centers, including those powering AI, is ballooning. (Financial Times)
Global sales of EVs and plug-in hybrids were up 25% in 2024 from the year before. China, the world's largest EV market, is a major engine behind the growth. (Reuters)
A massive plant to produce low-emissions steel could be in trouble. Steelmaker SSAB has pulled out of talks on federal funding for a plant in Mississippi. (Canary Media)
Some solar panel companies have turned to door-to-door sales. Things aren't always so sunny for those involved. (Wired)
• Meta's new AI model can translate speech from more than 100 languages
    www.technologyreview.com
Meta has released a new AI model that can translate speech from 101 different languages. It represents a step toward real-time, simultaneous interpretation, where words are translated as soon as they come out of someone's mouth. Typically, translation models for speech use a multistep approach. First they translate speech into text. Then they translate that text into text in another language. Finally, that translated text is turned into speech in the new language. This method can be inefficient, and at each step, errors and mistranslations can creep in. But Meta's new model, called SeamlessM4T, enables more direct translation from speech in one language to speech in another. The model is described in a paper published today in Nature. Seamless can translate text with 23% more accuracy than the top existing models. And although another model, Google's AudioPaLM, can technically translate more languages (113 of them, versus 101 for Seamless), it can translate them only into English. SeamlessM4T can translate into 36 other languages. The key is a process called parallel data mining, which finds instances when the sound in a video or audio clip matches a subtitle in another language from crawled web data. The model learned to associate those sounds in one language with the matching pieces of text in another. This opened up a whole new trove of examples of translations for the model. "Meta has done a great job having a breadth of different things they support, like text-to-speech, speech-to-text, even automatic speech recognition," says Chetan Jaiswal, a professor of computer science at Quinnipiac University, who was not involved in the research. "The mere number of languages they are supporting is a tremendous achievement." Human translators are still a vital part of the translation process, the researchers say in the paper, because they can grapple with diverse cultural contexts and make sure the same meaning is conveyed from one language into another.
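The contrast between the cascaded pipeline described above and direct speech-to-speech translation can be pictured as three hand-offs versus one. The sketch below uses toy stand-in functions to make the structure concrete; none of these names are real Meta or SeamlessM4T APIs, and the string tagging is purely illustrative.

```python
# Toy sketch of cascaded vs. direct speech translation.
# Each function is a stub that tags its input, standing in for a real model.

def speech_to_text(audio: str) -> str:
    # Stage 1: automatic speech recognition (stub).
    return f"text({audio})"

def translate_text(text: str, target_lang: str) -> str:
    # Stage 2: text-to-text machine translation (stub).
    return f"{target_lang}:{text}"

def text_to_speech(text: str) -> str:
    # Stage 3: speech synthesis (stub).
    return f"audio({text})"

def cascaded_translate(audio: str, target_lang: str) -> str:
    # Three hand-offs; an error at any stage propagates to the next.
    return text_to_speech(translate_text(speech_to_text(audio), target_lang))

def direct_translate(audio: str, target_lang: str) -> str:
    # A direct model maps source speech to target speech in one step,
    # with no intermediate text hand-offs where errors can creep in.
    return f"audio({target_lang}:speech[{audio}])"

print(cascaded_translate("hola", "en"))  # → audio(en:text(hola))
print(direct_translate("hola", "en"))    # → audio(en:speech[hola])
```

The point of the sketch is only the shape: the cascade threads its output through three separate models, while the direct approach has a single opportunity for error.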
This step is important, says Lynne Bowker of the University of Ottawa's School of Translation & Interpretation, who didn't work on Seamless. "Languages are a reflection of cultures, and cultures have their own ways of knowing things," she says. When it comes to applications like medicine or law, machine translations need to be thoroughly checked by a human, she says. If not, misunderstandings can result. For example, when Google Translate was used to translate public health information about the covid-19 vaccine from the Virginia Department of Health in January 2021, it translated "not mandatory" in English into "not necessary" in Spanish, changing the whole meaning of the message. AI models have many more examples to train on in some languages than others. This means current speech-to-speech models may be able to translate a language like Greek into English, where there may be many examples, but cannot translate from Swahili to Greek. The team behind Seamless aimed to solve this problem by pre-training the model on millions of hours of spoken audio in different languages. This pre-training allowed it to recognize general patterns in language, making it easier to process less widely spoken languages because it already had some baseline for what spoken language is supposed to sound like. The system is open source, which the researchers hope will encourage others to build upon its current capabilities. But some are skeptical of how useful it may be compared with available alternatives. "Google's translation model is not as open source as Seamless, but it's way more responsive and fast, and it doesn't cost anything as an academic," says Jaiswal. The most exciting thing about Meta's system is that it points to the possibility of instant interpretation across languages in the not-too-distant future, like the Babel fish in Douglas Adams' cult novel The Hitchhiker's Guide to the Galaxy. SeamlessM4T is faster than existing models but still not instant.
That said, Meta claims to have a newer version of Seamless that's as fast as human interpreters. "While having this kind of delayed translation is okay and useful, I think simultaneous translation will be even more useful," says Kenny Zhu, director of the Arlington Computational Linguistics Lab at the University of Texas at Arlington, who is not affiliated with the new research.
  • Fueling the future of digital transformation
    www.technologyreview.com
In the rapidly evolving landscape of digital innovation, staying adaptable isn't just a strategy; it's a survival skill. "Everybody has a plan until they get punched in the face," says Luis Niño, digital manager for technology ventures and innovation at Chevron, quoting Mike Tyson. Drawing from a career that spans IT, HR, and infrastructure operations across the globe, Niño offers a unique perspective on innovation and on how organizational microcultures within Chevron shape how digital transformation evolves. Centralized functions prioritize efficiency, relying on tools like AI, data analytics, and scalable system architectures. Meanwhile, business units focus on simplicity and effectiveness, deploying robotics and edge computing to meet site-specific needs and ensure safety. "From a digital transformation standpoint, what I have learned is that you have to tie your technology to what outcomes drive results for both areas, but you have to allow yourself to be flexible, to be nimble, and to understand that change is constant," he says. Central to this transformation is the rise of industrial AI. Unlike consumer applications, industrial AI operates in high-stakes environments where the cost of errors can be severe. "The wealth of potential information needs to be contextualized, modeled, and governed because of the safety of those underlying processes," says Niño. "If a machine reacts in ways you don't expect, people could get hurt, and so there's an extra level of care that needs to happen and that we need to think about as we deploy these technologies." Niño highlights Chevron's efforts to use AI for predictive maintenance, subsurface analytics, and process automation, noting that AI sits on top of that foundation of strong data management and robust telecommunications capabilities. As such, AI is not just a tool but a transformation catalyst redefining how talent is managed, procurement is optimized, and safety is ensured.
Looking ahead, Niño emphasizes the importance of adaptability and collaboration: Transformation is as much about technology as it is about people. With initiatives like the Citizen Developer Program and Learn Digital, Chevron is empowering its workforce to bridge the gap between emerging technologies and everyday operations using an iterative mindset. Niño is also keeping watch over the convergence of technologies like AI, quantum computing, the Internet of Things, and robotics, which hold the potential to transform how we produce and manage energy. "My job is to keep an eye on those developments," says Niño, "to make sure that we're managing these things responsibly, and that in the things that we test and trial and the things that we deploy, we maintain a strict sense of responsibility to make sure that we keep everyone safe: our employees, our customers, and also our stakeholders from a broader perspective." This episode of Business Lab is produced in association with Infosys Cobalt. Full Transcript. Megan Tatum: From MIT Technology Review, I'm Megan Tatum and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is digital transformation. From back-office operations to infrastructure in the field like oil rigs, companies continue to look for ways to increase profit, meet sustainability goals, and invest in the latest and greatest technology. Two words for you: enabling innovation. My guest is Luis Niño, who is the digital manager of technology ventures and innovation at Chevron. This podcast is produced in association with Infosys Cobalt. Welcome, Luis. Luis Niño: Thank you, Megan. Thank you for having me. Megan: Thank you so much for joining us. Just to set some context, Luis, you've had a really diverse career at Chevron, spanning IT, HR, and infrastructure operations. I wonder, how have those different roles shaped your approach to innovation and digital strategy?
Luis: Thank you for the question. And you're right, my career has spanned many different areas and geographies in the company. It really feels like I've worked for different companies every time I change roles. Like I said, different functions, organizations, locations I've had since here in Houston and in Bakersfield, California and in Buenos Aires, Argentina. From an organizational standpoint, I've seen central teams international service centers, as you mentioned, field infrastructure and operation organizations in our business units, and I've also had corporate function roles. And the reason why I mentioned that diversity is that each one of those looks at digital transformation and innovation through its own lens. From the priority to scale and streamline in central organizations to the need to optimize and simplify out in business units and what I like to call the periphery, you really learn about the concept first off of microcultures and how different these organizations can be even within our own walls, but also how those come together in organizations like Chevron. Over time, I would highlight two things. In central organizations, whether that's functions like IT, HR, or our technical center, we have a central technical center, where we continuously look for efficiencies in scaling, for system architectures that allow for economies of scale. As you can imagine, the name of the game is efficiency. We have also looked to improve employee experience. We want to orchestrate ecosystems of large technology vendors that give us an edge and move the massive organization forward. In areas like this, in central areas like this, I would say that it is data analytics, data science, and artificial intelligence that has become the sort of the fundamental tools to achieve those objectives. Now, if you allow that pendulum to swing out to the business units and to the periphery, the name of the game is effectiveness and simplicity. 
The priority for the business units is to find and execute technologies that help us achieve the local objectives and keep our people safe. Especially when we are talking about our manufacturing environments where there's risk for our folks. In these areas, technologies like robotics, the Internet of Things, and obviously edge computing are currently the enablers of information. I wouldn't want to miss the opportunity to say that both of those, let's call it, areas of the company, rely on the same foundation and that is a foundation of strong data management, of strong network and telecommunications capabilities because those are the veins through which the data flows and everything relies on data. In my experience, this pendulum also drives our technology priorities and our technology strategy. From a digital transformation standpoint, what I have learned is that you have to tie your technology to what outcomes drive results for both areas, but you have to allow yourself to be flexible, to be nimble, and to understand that change is constant. If you are deploying something in the center and you suddenly realize that some business unit already has a solution, you cannot just say, let's shut it down and go with what I said. You have to adapt, you have to understand behavioral change management and you really have to make sure that change and adjustments are your bread and butter. I don't know if you know this, Megan, but there's a popular fight happening this weekend with Mike Tyson and he has a saying, and that is everybody has a plan until they get punched in the face. And what he's trying to say is you have to be adaptable. The plan is good, but you have to make sure that you remain agile. Megan: Yeah, absolutely. Luis: And then I guess the last lesson really quick is about risk management or maybe risk appetite. 
Each group has its own risk appetite depending on the lens or where they're sitting, and this may create some conflict between organizations that want to move really, really fast and have urgency and others that want to take a step back and make sure that we're doing things right at the balance. I think that at the end, I think that's a question for leadership to make sure that they have a pulse on our ability to change. Megan: Absolutely, and you've mentioned a few different elements and technologies I'd love to dig into a bit more detail on. One of which is artificial intelligence because I know Chevron has been exploring AI for several years now. I wonder if you could tell us about some of the AI use cases it's working on and what frameworks you've developed for effective adoption as well. Luis: Yeah, absolutely. This is the big one, isn't it? Everybody's talking about AI. As you can imagine, the focus in our company is what is now being branded as industrial AI. That's really a simple term to explain that AI is being applied to industrial and manufacturing settings. And like other AI, and as I mentioned before, the foundation remains data. I want to stress the importance of data here. One of the differences however is that in the case of industrial AI, data comes from a variety of sources. Some of them are very critical. Some of them are non-critical. Sources like operating technologies, process control networks, and SCADA, all the way to Internet of Things sensors or industrial Internet of Things sensors, and unstructured data like engineering documentation and IT data. These are massive amounts of information coming from different places and also from different security structures. The complexity of industrial AI is considerably higher than what I would call consumer or productivity AI. Megan: Right. Luis: The wealth of potential information needs to be contextualized, modeled, and governed because of the safety of those underlying processes. 
When you're in an industrial setting, if a machine reacts in ways you don't expect, people could get hurt, and so there's an extra level of care that needs to happen and that we need to think about as we deploy these technologies. AI sits on top of that foundation and it takes different shapes. It can show up as a copilot like the ones that have been popularized recently, or it can show up as agentic AI, which is something that we're looking at closely now. And agentic AI is just a term to mean that AI can operate autonomously and can use complex reasoning to solve multistep problems in an industrial setting. So with that in mind, going back to your question, we use both kinds of AI for multiple use cases, including predictive maintenance, subsurface analytics, process automation and workflow optimization, and also end-user productivity. Each one of those use cases obviously needs specific objectives that the business is looking at in each area of the value chain. In predictive maintenance, for example, we monitor and we analyze equipment health, we prevent failures, and we allow for preventive maintenance and reduced downtime. The AI helps us understand when machinery needs to be maintained in order to prevent failure instead of just waiting for it to happen. In subsurface analysis, we're exploring AI to develop better models of hydrocarbon reservoirs. We are exploring AI to forecast geomechanical models and to capture and understand data from fiber optic sensing. Fiber optic sensing is a capability that has proven very valuable to us, and AI is helping us make sense of the wealth of information that comes out of the hole, as we like to say. Of course, we don't do this alone. We partner with many third-party organizations, with vendors, and with subject matter experts inside of Chevron to move the projects forward. There are several other areas beyond industrial AI that we are looking at.
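The predictive-maintenance idea Niño describes (flagging equipment drift before failure rather than waiting for it) can be sketched, in a deliberately minimal form, as a rolling-statistics anomaly detector. This is an illustrative toy, not Chevron's actual pipeline; the window size, threshold, and the simulated vibration signal are all invented for the example:

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag indices where a reading deviates more than `threshold`
    standard deviations from the trailing window's mean."""
    history = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                flagged.append(i)
        history.append(value)
    return flagged

# Simulated bearing-vibration signal: stable around 1.0, then a spike.
signal = [1.0 + 0.01 * ((i * 7) % 5) for i in range(40)] + [2.5]
print(detect_anomalies(signal))  # -> [40]: only the spike is flagged
```

In practice the "alert" would trigger a maintenance work order rather than a print statement, and a production system would use far richer models, but the core idea is the same: learn what normal looks like from recent telemetry and act before the deviation becomes a failure.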
AI really is a transformation catalyst, and so areas like finance and law and procurement and HR, we're also doing testing in those corporate areas. I can tell you that I've been part of projects in procurement, in HR. When I was in HR we ran a pretty amazing effort in partnership with a third-party company, and what they do is they seek to transform the way we understand talent, and the way they do that is they are trying to provide data-driven frameworks to make talent decisions. And so they redefine talent by framing data in the form of skills, and as they do this, they help de-bias processes that are usually or can be usually prone to unconscious biases and perspectives. It really is fascinating to think of your talent-based skills and to start decoupling them from what we know since the industrial era began, which is people fit in jobs. Now the question is more the other way around. How can jobs adapt to people's skills? And then in procurement, AI is basically helping us open the aperture to a wider array of vendors in an automated fashion that makes us better partners. It's more cost-effective. It's really helpful. Before I close here, you did reference frameworks, so the framework of industrial AI versus what I call productivity AI, the understanding of the use cases. All of this sits on top of our responsible AI frameworks. We have set up a central enterprise AI organization and they have really done a great job in developing key areas of responsible AI as well as training and adoption frameworks. This includes how to use AI, how not to use AI, what data we can share with the different GPTs that are available to us. We are now members of organizations like the Responsible AI Institute. This is an organization that fosters the safe use of AI and trustworthy AI. But our own responsible AI framework, it involves four pillars. 
The first one is the principles, and this is how we make sure we continue to stay aligned with the values that drive this company, which we call The Chevron Way. It includes assessment, making sure that we evaluate these solutions in proportion to impact and risk. As I mentioned, when you're talking about industrial processes, people's lives are at stake. And so we take a very close look at what we are putting out there and how we ensure that it keeps our people safe. It includes education, I mentioned training our people to augment their capabilities and reinforcing responsible principles, and the last of the four is governance oversight and accountability through control structures that we are putting in place. Megan: Fantastic. Thank you so much for those really fascinating specific examples as well. It's great to hear about. And digital transformation, which you did touch on briefly, has become critical of course to enable business growth and innovation. I wonder what has Chevron's digital transformation looked like and how has the shift affected overall operations and the way employees engage with technology as well? Luis: Yeah, yeah. That's a really good question. The term digital transformation is interpreted in many different ways. For me, it really is about leveraging technology to drive business results and to drive business transformation. We usually tend to specify emerging technology as the catalyst for transformation. I think that is okay, but I also think that there are ways that you can drive digital transformation with technology that's not necessarily emerging but is being optimized, and so under this umbrella, we include everything from our Citizen Developer Program to complex industry partnerships that help us maximize the value of data. 
The Citizen Developer Program has been very successful in helping bridge the gap between our technical software engineering and software development practices and people who are out there doing the work, getting familiar, and demystifying the way to build solutions. I do believe that transformation is as much about technology as it is about people. And so to go back to the responsible AI framework, we are actively training and upskilling the workforce. We created a program called Learn Digital that helps employees embrace the technologies. I mentioned the concept of demystifying. It's really important that people don't fall into the trap of getting scared by the potential of the technology or the fact that it is new, and we help them and we give them the tools to bridge the change management gap so they can get to use them and get the most out of them. At a high level, our transformation has followed the cyclical nature that pretty much any transformation does. We have identified the data foundations that we need to have. We have understood the impact of the processes that we are trying to digitize. We organize that information, then we streamline and automate processes, we learn, and now machines learn, and then we do it all over again. And so this cyclical mindset, this iterative mindset has really taken hold in our culture and it has made us a little bit better at accepting the technologies that are driving the change. Megan: And to look at one of those technologies in a bit more detail, cloud computing has revolutionized infrastructure across industries. But there's also a pendulum shift now toward hybrid and edge computing models. How is Chevron balancing cloud, hybrid, and edge strategies for optimal performance as well? Luis: Yeah, that's a great question and I think you could argue that was the genesis of the digital transformation effort.
It's been a journey for us and it's a journey that I think we're not the only ones that may have started it as a cost savings and storage play, but then we got to this ever-increasing need for multiple things like scaling compute power to support large language models and maximize how we run complex models. There's an increasing need to store vast amounts of data for training and inference models while we improve data management and, while we predict future needs. There's a need for the opportunity to eliminate hardware constraints. One of the promises of cloud was that you would be able to ramp up and down depending on your compute needs as projects demanded. And that hasn't stopped, that has only increased. And then there's a need to be able to do this at a global level. For a company like ours that is distributed across the globe, we want to do this everywhere while actively managing those resources without the weight of the infrastructure that we used to carry on our books. Cloud has really helped us change the way we think about the digital assets that we have. It's important also that it has created this symbiotic need to grow between AI and the cloud. So you don't have the AI without the cloud, but now you don't have the cloud without AI. In reality, we work on balancing the benefits of cloud and hybrid and edge computing, and we keep operational efficiency as our North Star. We have key partnerships in cloud, that's something that I want to make sure I talk about. Microsoft is probably the most strategic of our partnerships because they've helped us set our foundation for cloud. But we also think of the convenience of hybrid through the lens of leveraging a convenient, scalable public cloud and a very secure private cloud that helps us meet our operational and safety needs. Edge computing fills the gap or the need for low latency and real-time data processing, which are critical constraints for decision-making in most of the locations where we operate. 
You can think of an offshore rig, a refinery, an oil rig out in the field, and maybe even not-so-remote areas like here in our corporate offices. Putting that compute power close to the data source is critical. So we work and we partner with vendors to enable lighter compute that we can set at the edge and, I mentioned the foundation earlier, faster communication protocols at the edge that also solve the need for speed. But it is important to remember that you don't want to think about edge computing and cloud as separate things. Cloud supports edge by providing centralized management by providing advanced analytics among others. You can train models in the cloud and then deploy them to edge devices, keeping real-time priorities in mind. I would say that edge computing also supports our cybersecurity strategy because it allows us to control and secure sensitive environments and information while we embed machine learning and AI capabilities out there. So I have mentioned use cases like predictive maintenance and safety, those are good examples of areas where we want to make sure our cybersecurity strategy is front and center. When I was talking about my experience I talked about the center and the edge. Our strategy to balance that pendulum relies on flexibility and on effective asset management. And so making sure that our cloud reflects those strategic realities gives us a good footing to achieve our corporate objectives. Megan: As you say, safety is a top priority. How do technologies like the Internet of Things and AI help enhance safety protocols specifically too, especially in the context of emissions tracking and leak detection? Luis: Yeah, thank you for the question. Safety is the most important thing that we think and talk about here at Chevron. There is nothing more important than ensuring that our people are safe and healthy, so I would break safety down into two. 
Before I jump to emissions tracking and leak detection, I just want to make a quick point on personal safety and how we leverage IoT and AI to that end. We use sensing capabilities that help us keep workers out of harm's way, and so things like computer vision to identify and alert people who are coming into safety areas. We also use computer vision, for example, to identify PPE requirements (personal protective equipment requirements), and so if there are areas that require a certain type of clothing, a certain type of identification, or a hard hat, we are using technologies that can help us make sure people have that before they go into a particular area. We're also using wearables. One of the use cases is they help us track exhaustion and dehydration in locations where that creates inherent risk, and so locations that are very hot, whether it's because of the weather or because they are enclosed, we can use wearables that tell us how fast the person is getting dehydrated, what levels of liquid or sodium they need to make sure that they're safe, or if they need to take a break. We have those capabilities now. Going back to emissions tracking and leak detection, I think it's actually the combination of IoT and AI that can transform how we prevent and react to those. In this case, we also deploy sensing capabilities. We use things like computer vision, like infrared capabilities, and we use others that deliver data to the AI models, which then alert and enable rapid response. The way I would explain how we use IoT and AI for safety, whether it's personnel safety or emissions tracking and leak detection, is to think about sensors as the extension of human ability to sense. In some cases, you could argue it's super abilities. And so if you think of sight, normally you would've had supervisors or people out there that would be looking at the field and identifying issues.
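The sense-alert-respond loop Niño describes, where IoT readings feed a model that raises a rapid-response alert, can be reduced to a toy sketch. Everything here (the sensor names, the ppm limit, the debounce count) is invented for illustration and stands in for the far more sophisticated models an operator would actually run:

```python
def leak_alerts(readings, limit_ppm=50.0, consecutive=3):
    """Raise an alert for a sensor once it exceeds `limit_ppm` on
    `consecutive` successive readings, debouncing one-off noise."""
    streak = {}
    alerts = []
    for sensor_id, ppm in readings:
        if ppm > limit_ppm:
            streak[sensor_id] = streak.get(sensor_id, 0) + 1
        else:
            streak[sensor_id] = 0
        if streak[sensor_id] == consecutive:
            alerts.append(sensor_id)
    return alerts

# Hypothetical stream: one camera sees a sustained exceedance,
# another sensor produces a single transient spike.
readings = [("ir-cam-1", 12.0), ("ir-cam-1", 61.0), ("ir-cam-1", 64.0),
            ("ir-cam-1", 70.0), ("ultrasonic-2", 80.0), ("ultrasonic-2", 9.0)]
print(leak_alerts(readings))  # -> ['ir-cam-1']
```

The debounce requirement is the design point worth noting: in a safety context you want to react fast, but a single noisy reading from one sensor should not trigger a site response, so confirmation across successive readings (or across sensor modalities) is the usual compromise.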
Well, now we can use computer vision with traditional RGB vision, we can use them with infrared, we can use multi-angle to identify patterns, and have AI tell us what's going on. If you keep thinking about the human senses, that's sight, but you can also use sound through ultrasonic sensors or microphone sensors. You can use touch through vibration recognition and heat recognition. And even more recently, this is something that we are testing more recently, you can use smell. There are companies that are starting to digitize smell. Pretty exciting, also a little bit crazy. But it is happening. And so these are all tools that any human would use to identify risk. Well, so now we can do it as an extension of our human abilities to do so. This way we can react much faster and better to the anomalies. A specific example with methane. We have a simple goal with methane, we want to keep methane in the pipe. Once it's out, it's really hard or almost impossible to take it back. Over the last six to seven years, we have reduced our methane intensity by over 60% and we're leveraging technology to achieve that. We have deployed a methane detection program. We have trialed over 10 to 15 advanced methane detection technologies. A technology that I have been looking at recently is called Aquanta Vision. This is a company supported by an incubator program we have called Chevron Studio. We did this in partnership with the National Renewable Energy Laboratory, and what they do is they leverage optical gas imaging to detect methane effectively and to allow us to prevent it from escaping the pipe. So that's just an example of the technologies that we're leveraging in this space. Megan: Wow, that's fascinating stuff. And on emissions as well, Chevron has made significant investments in new energy technologies like hydrogen, carbon capture, and renewables. How do these technologies fit into Chevron's broader goal of reducing its carbon footprint? 
Luis: This is obviously a fascinating space for us, one that is ever-changing. It is honestly not my area of expertise. But what I can say is we truly believe we can achieve high returns and lower carbon, and that's something that we communicate broadly. A few years ago, I believe it was 2021, we established our Chevron New Energies company and they actively explore lower carbon alternatives including hydrogen, renewables, and carbon capture offsets. My area, the digital area, and the convergence between digital technologies and the technical sciences will enable the techno-commercial viability of those business lines. Carbon capture is something that we've done for a long time. We have decades of experience in carbon capture technologies across the world. One of our larger projects, the Gorgon Project in Australia, I think they've captured something between 5 and 10 million tons of CO2 emissions in the past few years, and so we have good expertise in that space. But we also actively partner in carbon capture. We have joined carbon capture hubs here in Houston, for example, where we're investing in companies like Carbon Clean, Carbon Engineering, and Svante. I'm familiar with these names because the corporate VC team is close to me. These companies provide technologies for direct air capture. They provide solutions for hard-to-abate industries. And so we want to keep an eye on these emerging capabilities and make use of them to continuously lower our carbon footprint. There are two areas here that I would like to talk about. Hydrogen first. This is another area that we're familiar with. Our plan is to build on our existing assets and capabilities to deliver a large-scale hydrogen business. Since 2005, I think we've been doing retail hydrogen, and we also have several partnerships there. In renewables, we are creating a range of fuels for different transportation types.
We use diesel, bio-based diesel, we use renewable natural gas, we use sustainable aviation fuel. Yeah, so these are all areas of importance to us. They're emerging business lines that are young in comparison to the rest of our company. We've been a company for 140 years plus, and this started in 2021, so you can imagine how steep that learning curve is. I mentioned how we leverage our corporate venture capital team to learn and to keep an eye out on what are these emerging trends and technologies that we want to learn about. They leverage two things. They leverage a core fund, which is focused on areas that can drive innovation for our core business. And we have a separate future energy fund that explores areas that are emerging. Not only do they invest in places like hydrogen, carbon capture, and renewables, but they also may invest in other areas like wind and geothermal and nuclear capability. So we constantly keep our eyes open for these emerging technologies. Megan: I see. And I wonder if you could share a bit more actually about Chevron's role in driving sustainable business innovation. I'm thinking of initiatives like converting used cooking oil into biodiesel, for example. I wonder how those contribute to that overall goal of creating a circular economy. Luis: Yeah, this is fascinating and I was so happy to learn a little bit more about this this year when I had the chance to visit our offices in Iowa. I'll get into that in a second. But happy to talk about this, again with the caveat that it's not my area of expertise. Megan: Of course. Luis: In the case of biodiesel, we acquired a company called REG in 2022. They were one of the founders of the renewable fuels industry, and they honestly do incredible work to create energy through a process, I forget the name of the process to be honest.
But at the most basic level what they do is they prepare feedstocks that come from different types of biomass, you mentioned cooking oils, there's also soybeans, there's animal fats. And through various chemical reactions, what they do is convert components of the feedstock into biodiesel and glycerin. After that process, what they do is they separate unreacted methanol, which is recovered and recycled into the process, and the biodiesel goes through a final processing to make sure that it meets the standards necessary to be commercialized. What REG has done is it has boosted our knowledge as a broader organization on how to do this better. They continuously look for bio-feedstocks that can help us deliver new types of energy. I had mentioned bio-based diesel. One of the areas that we're very focused on right now is sustainable aviation fuel. I find that fascinating. The reason why this is working and the reason why this is exciting is because they brought this great expertise and capability into Chevron. And in turn, as a larger organization, we're able to leverage our manufacturing and distribution capabilities to continue to provide that value to our customers. I mentioned that I learned a little bit more about this this year. I was lucky earlier in the year I was able to visit our REG offices in Ames, Iowa. That's where they're located. And I will tell you that the passion and commitment that those people have for the work that they do was incredibly energizing. These are folks who have helped us believe, really, that our promise of lower carbon is attainable. Megan: Wow. Sounds like there's some fascinating work going on. Which brings me to my final question, which is, sort of looking ahead, what emerging technologies are you most excited about, and how do you see them impacting both Chevron's core business and the energy sector as a whole as well? Luis: Yeah, that's a great question.
I have no doubt that the energy business is changing and will continue to change only faster, both our core business as well as the future energy, or the way it's going to look in the future. Honestly, in my line of work, I come across exciting technology every day. The obvious answers are AI and industrial AI. These are things that are already changing the way we live without a doubt. You can see it in people's productivity. You can see it in how we optimize and transform workflows. AI is changing everything. I am actually very, very interested in IoT, in the Internet of Things, and robotics, the ability to protect humans in high-risk environments, like I mentioned, is critical to us, the opportunity to prevent high-risk events and predict when they're likely to happen. This is pretty massive, both for our productivity objectives as well as for our lower carbon objectives. If we can predict when we are at risk of particular events, we could avoid them altogether. As I mentioned before, this ubiquitous ability to sense our surroundings is a capability that our industry and I'm going to say humankind, is only beginning to explore. There's another area that I didn't talk too much about, which I think is coming, and that is quantum computing. Quantum computing promises to change the way we think of compute power and it will unlock our ability to simulate chemistry, to simulate molecular dynamics in ways we have not been able to do before. We're working really hard in this space. When I say molecular dynamics, think of the way that we produce energy today. It is all about the molecule and understanding the interactions between hydrocarbon molecules and the environment. The ability to do that in multi-variable systems is something that quantum, we believe, can provide an edge on, and so we're working really hard in this space. 
Yeah, there are so many, and having talked about all of them, AI, IoT, robotics, quantum, the most interesting thing to me is the convergence of all of them. If you think about the opportunity to leverage robotics, but also do it as the machines continue to control limited processes and understand what it is they need to do in a preventive and predictive way, this is such an incredible potential to transform our lives, to make an impact in the world for the better. We see that potential. My job is to keep an eye on those developments, to make sure that we're managing these things responsibly and the things that we test and trial and the things that we deploy, that we maintain a strict sense of responsibility to make sure that we keep everyone safe, our employees, our customers, and also our stakeholders from a broader perspective. Megan: Absolutely. Such an important point to finish on. And unfortunately, that is all the time we have for today, but what a fascinating conversation. Thank you so much for joining us on the Business Lab, Luis. Luis: Great to talk to you. Megan: Thank you so much. That was Luis Niño, who is the digital manager of technology ventures and innovation at Chevron, who I spoke with today from Brighton, England. That's it for this episode of Business Lab. I'm Megan Tatum, I'm your host and a contributing editor at Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. This show is available wherever you get your podcasts, and if you enjoyed this episode, we really hope you'll take a moment to rate and review us. Business Lab is a production of MIT Technology Review, and this episode was produced by Giro Studios. Thank you so much for listening.
  • The Download: China's marine ranches, and fast-learning robots
    www.technologyreview.com
This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

China wants to restore the sea with high-tech marine ranches

A short ferry ride from the port city of Yantai, on the northeast coast of China, sits Genghai No. 1, a 12,000-metric-ton ring of oil-rig-style steel platforms, advertised as a hotel and entertainment complex. Genghai is in fact an unusual tourist destination, one that breeds 200,000 high-quality marine fish each year. The vast majority are released into the ocean as part of a process known as marine ranching. The Chinese government sees this work as an urgent and necessary response to the bleak reality that fisheries are collapsing both in China and worldwide. But just how much of a difference can it make? Read the full story. Matthew Ponsford

This story is from the latest print edition of MIT Technology Review: it's all about the exciting breakthroughs happening in the world right now. If you don't already, subscribe to receive future copies.

Fast-learning robots: 10 Breakthrough Technologies 2025

Generative AI is causing a paradigm shift in how robots are trained. It's now clear how we might finally build the sort of truly capable robots that have for decades remained the stuff of science fiction. A few years ago, roboticists began marveling at the progress being made in large language models. Makers of those models could feed them massive amounts of textbooks, poems, and manuals, and then fine-tune them to generate text based on prompts. It's one thing to use AI to create sentences on a screen, but another thing entirely to use it to coach a physical robot in how to move about and do useful things. Now, roboticists have made major breakthroughs in that pursuit. Read the full story. James O'Donnell

Fast-learning robots is one of our 10 Breakthrough Technologies for 2025, MIT Technology Review's annual list of tech to watch.
Check out the rest of the list, and cast your vote for the honorary 11th breakthrough.

The must-reads

I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

1 US regulators are suing Elon Musk
For allegedly violating securities law when he bought Twitter in 2022. (NYT $)
+ The case claims that Musk continued to buy shares at artificially low prices. (FT $)
+ Musk is unlikely to take it lying down. (Politico)

2 SpaceX has launched two private missions to the moon
Falling debris from the rockets has forced Qantas to delay flights. (The Guardian)
+ The airline has asked for more precise warnings around future launches. (Semafor)
+ Space startups are on course for a funding windfall. (Reuters)
+ What's next for NASA's giant moon rocket? (MIT Technology Review)

3 Home security cameras are capturing homes burning down in LA
Residents have remotely tuned into live footage of their own homes burning. (WP $)
+ California's water scarcity is only going to get worse. (Vox)
+ How Los Angeles can rebuild in the wake of the devastation. (The Atlantic $)

4 ChatGPT is about to get much more personal
Including reminding you about walking the dog. (Bloomberg $)

5 Inside the $30 million campaign to liberate social media from billionaires
Free Our Feeds wants to restructure platforms around open-source tech. (Insider $)

6 How to avoid getting sick right now
(The Atlantic $)
+ But coughs and sneezes could be the least of our problems. (The Guardian)

7 The US and China are still collaborating on AI research
Despite rising tensions between the countries. (Rest of World)

8 These startups think they have the solution to loneliness
Making friends isn't always easy, but these companies have some ideas. (NY Mag $)

9 Here are just some of the ways the universe could end
Don't say I didn't warn you. (Ars Technica)
+ But at least Earth is probably safe from a killer asteroid for 1,000 years. (MIT Technology Review)

10 AI is inventing impossible languages
They could help us learn more about how humans learn. (Quanta Magazine)
+ These impossible instruments could change the future of music. (MIT Technology Review)

Quote of the day

"If you can get away with it when it's front-page news, why bother to comply at all?"

Marc Fagel, a former director of the SEC's San Francisco office, suggests the agency's decision to sue Elon Musk is intended as a deterrent to others, the Wall Street Journal reports.

The big story

I took an international trip with my frozen eggs to learn about the fertility industry

September 2022
Anna Louie Sussman

Like me, my eggs were flying economy class. They were ensconced in a cryogenic storage flask packed into a metal suitcase next to Paolo, the courier overseeing their passage from a fertility clinic in Bologna, Italy, to the clinic in Madrid, Spain, where I would be undergoing in vitro fertilization. The shipping of gametes and embryos around the world is a growing part of a booming global fertility sector. As people have children later in life, the need for fertility treatment increases each year. After paying for storage costs for six and four years, respectively, at 40 I was ready to try to get pregnant. Transporting the Bolognese batch served to literally put all my eggs in one basket. Read the full story.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)

+ We need to save the world's largest sea star!
+ Maybe our little corner of the universe is more special than we've been led to believe after all.
+ How the world's leading anti-anxiety coach overcame her own anxiety.
+ Here's how to keep your eyes on the prize in 2025, and beyond!
  • Training robots in the AI-powered industrial metaverse
    www.technologyreview.com
    Imagine the bustling floors of tomorrow's manufacturing plant: Robots, well-versed in multiple disciplines through adaptive AI education, work seamlessly and safely alongside human counterparts. These robots can transition effortlessly between tasks, from assembling intricate electronic components to handling complex machinery assembly. Each robot's unique education enables it to predict maintenance needs, optimize energy consumption, and innovate processes on the fly, dictated by real-time data analyses and learned experiences in their digital worlds. Training for robots like this will happen in a virtual school, a meticulously simulated environment within the industrial metaverse. Here, robots learn complex skills on accelerated timeframes, acquiring in hours what might take humans months or even years.

Beyond traditional programming

Training for industrial robots was once like a traditional school: rigid, predictable, and limited to practicing the same tasks over and over. But now we're at the threshold of the next era. Robots can learn in virtual classrooms, immersive environments in the industrial metaverse that use simulation, digital twins, and AI to mimic real-world conditions in detail. This digital world can provide an almost limitless training ground that mirrors real factories, warehouses, and production lines, allowing robots to practice tasks, encounter challenges, and develop problem-solving skills. What once took days or even weeks of real-world programming, with engineers painstakingly adjusting commands to get the robot to perform one simple task, can now be learned in hours in virtual spaces. This approach, known as simulation to reality (Sim2Real), blends virtual training with real-world application, bridging the gap between simulated learning and actual performance.
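The Sim2Real idea can be sketched in a few lines of code: a controller practices a task across many cheap, randomized virtual episodes before it ever touches real hardware. The toy below is purely illustrative, not any vendor's actual pipeline; the reaching task, the proportional controller, and the random-search strategy are all assumptions made for the sketch.

```python
import random

def simulate_episode(gain: float, noise: float) -> float:
    """One virtual episode: a simulated arm closes the gap to a target.
    Returns the remaining error after a fixed number of control steps."""
    error = 1.0
    for _ in range(20):
        # Proportional control step, with simulated sensor noise.
        error -= gain * error + random.uniform(-noise, noise)
    return abs(error)

def train_in_simulation(episodes: int = 500, seed: int = 0) -> float:
    """Sim2Real sketch: search for a controller gain across many cheap
    virtual episodes with randomized conditions (domain randomization),
    before any real-world trial."""
    random.seed(seed)
    best_gain, best_score = 0.0, float("inf")
    for _ in range(episodes):
        gain = random.uniform(0.05, 0.95)
        # Average performance over several randomized noise levels, so the
        # chosen gain is robust rather than tuned to one condition.
        score = sum(
            simulate_episode(gain, noise=random.uniform(0.0, 0.05))
            for _ in range(5)
        ) / 5
        if score < best_score:
            best_gain, best_score = gain, score
    return best_gain
```

The point of the sketch is the economics: each virtual episode is nearly free, so the search can afford thousands of trials that would be prohibitively slow and expensive on a physical robot.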
Although the industrial metaverse is still in its early stages, its potential to reshape robotic training is clear, and these new ways of upskilling robots can enable unprecedented flexibility. Italian automation provider EPF found that AI shifted the company's entire approach to developing robots. "We changed our development strategy from designing entire solutions from scratch to developing modular, flexible components that could be combined to create complete solutions, allowing for greater coherence and adaptability across different sectors," says EPF's chairman and CEO Franco Filippi.

Learning by doing

AI models gain power when trained on vast amounts of data, such as large sets of labeled examples, learning categories or classes by trial and error. In robotics, however, this approach would require hundreds of hours of robot time and human oversight to train a single task. Even the simplest of instructions, like "grab a bottle," could result in many varied outcomes depending on the bottle's shape, color, and environment. Training then becomes a monotonous loop that yields little significant progress for the time invested.

Building AI models that can generalize and then successfully complete a task regardless of the environment is key for advancing robotics. Researchers from New York University, Meta, and Hello Robot have introduced robot utility models that achieve a 90% success rate in performing basic tasks across unfamiliar environments without additional training. Large language models are used in combination with computer vision to provide continuous feedback to the robot on whether it has successfully completed the task. This feedback loop accelerates the learning process by combining multiple AI techniques, and avoids repetitive training cycles. Robotics companies are now implementing advanced perception systems capable of training and generalizing across tasks and domains.
For example, EPF worked with Siemens to integrate visual AI and object recognition into its robotics to create solutions that can adapt to varying product geometries and environmental conditions without mechanical reconfiguration.

Learning by imagining

Scarcity of training data is a constraint for AI, especially in robotics. However, innovations that use digital twins and synthetic data to train robots have significantly advanced on previously costly approaches. For example, Siemens' SIMATIC Robot Pick AI expands on this vision of adaptability, transforming standard industrial robots, once limited to rigid, repetitive tasks, into complex machines. Trained on synthetic data (virtual simulations of shapes, materials, and environments), the AI prepares robots to handle unpredictable tasks, like picking unknown items from chaotic bins, with over 98% accuracy. When mistakes happen, the system learns, improving through real-world feedback. Crucially, this isn't just a one-robot fix. Software updates scale across entire fleets, upgrading robots to work more flexibly and meet the rising demand for adaptive production.

Another example is the robotics firm ANYbotics, which generates 3D models of industrial environments that function as digital twins of real environments. Operational data, such as temperature, pressure, and flow rates, are integrated to create virtual replicas of physical facilities where robots can train. An energy plant, for example, can use its site plans to generate simulations of inspection tasks it needs robots to perform in its facilities. This speeds the robots' training and deployment, allowing them to perform successfully with minimal on-site setup. Simulation also allows for the near-costless multiplication of robots for training. In simulation, we can create thousands of virtual robots to practice tasks and optimize their behavior.
This allows us to accelerate training time and share knowledge between robots, says Péter Fankhauser, CEO and co-founder of ANYbotics. Because robots need to understand their environment regardless of orientation or lighting, ANYbotics and partner Digica created a method of generating thousands of synthetic images for robot training. By removing the painstaking work of collecting huge numbers of real images from the shop floor, the time needed to teach robots what they need to know is drastically reduced.

Similarly, Siemens leverages synthetic data to generate simulated environments to train and validate AI models digitally before deployment into physical products. "By using synthetic data, we create variations in object orientation, lighting, and other factors to ensure the AI adapts well across different conditions," says Vincenzo De Paola, project lead at Siemens. "We simulate everything from how the pieces are oriented to lighting conditions and shadows. This allows the model to train under diverse scenarios, improving its ability to adapt and respond accurately in the real world."

Digital twins and synthetic data have proven powerful antidotes to data scarcity and costly robot training. Robots that train in artificial environments can be prepared quickly and inexpensively for wide varieties of visual possibilities and scenarios they may encounter in the real world. "We validate our models in this simulated environment before deploying them physically," says De Paola. "This approach allows us to identify any potential issues early and refine the model with minimal cost and time." This technology's impact can extend beyond initial robot training. If the robot's real-world performance data is used to update its digital twin and analyze potential optimizations, it can create a dynamic cycle of improvement to systematically enhance the robot's learning, capabilities, and performance over time.
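A minimal sketch of the synthetic-data approach described above: generate training samples whose rendering parameters (object orientation, lighting, shadows) are randomized per image, so a vision model sees conditions it would never encounter often enough in real photographs. The parameter names and ranges below are illustrative assumptions, not Siemens' or ANYbotics' actual configuration.

```python
import random

def synthetic_scene_params(n: int, seed: int = 42):
    """Yield n randomized rendering configurations for synthetic training
    images. Varying pose and lighting per sample (domain randomization)
    helps a vision model generalize across conditions at deployment."""
    rng = random.Random(seed)  # fixed seed makes the dataset reproducible
    for _ in range(n):
        yield {
            "orientation_deg": rng.uniform(0.0, 360.0),  # object pose
            "light_intensity": rng.uniform(0.2, 1.0),    # scene brightness
            "shadow_softness": rng.uniform(0.0, 1.0),    # shadow rendering
        }

# In a real pipeline, each configuration would be handed to a renderer to
# produce one labeled image; here the dictionaries stand in for that step.
```

Because the configurations are drawn programmatically, the dataset can be regrown at any size, and edge cases (extreme lighting, unusual poses) can be oversampled deliberately rather than waited for on the shop floor.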
The well-educated robot at work

With AI and simulation powering a new era in robot training, organizations will reap the benefits. Digital twins allow companies to deploy advanced robotics with dramatically reduced setup times, and the enhanced adaptability of AI-powered vision systems makes it easier for companies to alter product lines in response to changing market demands. The new ways of schooling robots are also transforming investment in the field by reducing risk. "It's a game-changer," says De Paola. "Our clients can now offer AI-powered robotics solutions as services, backed by data and validated models. This gives them confidence when presenting their solutions to customers, knowing that the AI has been tested extensively in simulated environments before going live."

Filippi envisions this flexibility enabling today's robots to make tomorrow's products. "The need in one or two years' time will be for processing new products that are not known today. With digital twins and this new data environment, it is possible to design today a machine for products that are not known yet," says Filippi. Fankhauser takes this idea a step further. "I expect our robots to become so intelligent that they can independently generate their own missions based on the knowledge accumulated from digital twins," he says. "Today, a human still guides the robot initially, but in the future, they'll have the autonomy to identify tasks themselves."

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review's editorial staff.
  • Here's our forecast for AI this year
    www.technologyreview.com
    This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

In December, our small but mighty AI reporting team was asked by our editors to make a prediction: What's coming next for AI? In 2024, AI contributed both to Nobel Prize-winning chemistry breakthroughs and a mountain of cheaply made content that few people asked for but that nonetheless flooded the internet. Take AI-generated Shrimp Jesus images, among other examples. There was also a spike in greenhouse-gas emissions last year that can be attributed partly to the surge in energy-intensive AI. Our team got to thinking about how all of this will shake out in the year to come.

As we look ahead, certain things are a given. We know that agents (AI models that do more than just converse with you and can actually go off and complete tasks for you) are the focus of many AI companies right now. Building them will raise lots of privacy questions about how much of our data and preferences we're willing to give up in exchange for tools that will (allegedly) save us time. Similarly, the need to make AI faster and more energy efficient is putting so-called small language models in the spotlight.

We instead wanted to focus on less obvious predictions. Mine were about how AI companies that previously shunned work in defense and national security might be tempted this year by contracts from the Pentagon, and how Donald Trump's attitudes toward China could escalate the global race for the best semiconductors. Read the full list.

What's not evident in that story is that the other predictions were not so clear-cut. Arguments ensued about whether or not 2025 will be the year of intimate relationships with chatbots, AI throuples, or traumatic AI breakups. To witness the fallout from our team's lively debates (and hear more about what didn't make the list), you can join our upcoming LinkedIn Live this Thursday, January 16.
I'll be talking it all over with Will Douglas Heaven, our senior editor for AI, and our news editor, Charlotte Jee.

There are a couple of other things I'll be watching closely in 2025. One is how little the major AI players (namely OpenAI, Microsoft, and Google) are disclosing about the environmental burden of their models. Lots of evidence suggests that asking an AI model like ChatGPT about knowable facts, like the capital of Mexico, consumes much more energy (and releases far more emissions) than simply asking a search engine. Nonetheless, OpenAI's Sam Altman has in recent interviews spoken positively about the idea of ChatGPT replacing the googling that we've all learned to do in the past two decades. It's already happening, in fact.

The environmental cost of all this will be top of mind for me in 2025, as will the possible cultural cost. We will go from searching for information by clicking links and (hopefully) evaluating sources to simply reading the responses that AI search engines serve up for us. As our editor in chief, Mat Honan, said in his piece on the subject, "Who wants to have to learn when you can just know?"

Now read the rest of The Algorithm

Deeper Learning

What's next for our privacy?

The US Federal Trade Commission has taken a number of enforcement actions against data brokers, some of which have tracked and sold geolocation data from users at sensitive locations like churches, hospitals, and military installations without explicit consent. Though limited in nature, these actions may offer some new and improved protections for Americans' personal information.

Why it matters: A consensus is growing that Americans need better privacy protections, and that the best way to deliver them would be for Congress to pass comprehensive federal privacy legislation. Unfortunately, that's not going to happen anytime soon. Enforcement actions from agencies like the FTC might be the next best thing in the meantime. Read more in Eileen Guo's excellent story here.
Bits and Bytes

Meta trained its AI on a notorious piracy database
New court records, Wired reports, reveal that Meta used a notorious so-called shadow library of pirated books that originated in Russia to train its generative AI models. (Wired)

OpenAI's top reasoning model struggles with the NYT Connections game
The game requires players to identify how groups of words are related. OpenAI's o1 reasoning model had a hard time. (Mind Matters)

Anthropic's chief scientist on 5 ways agents will be even better in 2025
The AI company Anthropic is now worth $60 billion. The company's cofounder and chief scientist, Jared Kaplan, shared how AI agents will develop in the coming year. (MIT Technology Review)

A New York legislator attempts to regulate AI with a new bill
This year, a high-profile bill in California to regulate the AI industry was vetoed by Governor Gavin Newsom. Now, a legislator in New York is trying to revive the effort in his own state. (MIT Technology Review)
  • The Download: the future of nuclear power, and fact checking Mark Zuckerberg
    www.technologyreview.com
    This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

What's next for nuclear power

While nuclear reactors have been generating power around the world for over 70 years, the current moment is one of potentially radical transformation for the technology. As electricity demand rises around the world for everything from electric vehicles to data centers, there's renewed interest in building new nuclear capacity, as well as extending the lifetime of existing plants and even reopening facilities that have been shut down. Efforts are also growing to rethink reactor designs, and 2025 marks a major test for so-called advanced reactors as they begin to move from ideas on paper into the construction phase. Here's what to expect next for the industry.

Casey Crownhart

This piece is part of MIT Technology Review's What's Next series, looking across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Mark Zuckerberg and the power of the media

On Tuesday last week, Meta CEO Mark Zuckerberg announced that Meta is done with fact checking in the US, that it will roll back restrictions on speech, and that it is going to start showing people more tailored political content in their feeds. While the end of fact checking has gotten most of the attention, the changes to its hateful speech policy are also notable. Zuckerberg, whose previous self-acknowledged mistakes include the Cambridge Analytica data scandal and helping to fuel a genocide in Myanmar, presented Facebook's history of fact-checking and content moderation as something he was pressured into doing by the government and media. The reality, of course, is that these were his decisions. He famously calls the shots, and always has. Read the full story.
Mat Honan

This story first appeared in The Debrief, providing a weekly take on the tech news that really matters and links to stories we love, as well as the occasional recommendation. Sign up to receive it in your inbox every Friday.

Here's our forecast for AI this year

In December, our small but mighty AI reporting team was asked by our editors to make a prediction: What's coming next for AI? As we look ahead, certain things are a given. We know that agents (AI models that do more than just converse with you and can actually go off and complete tasks for you) are the focus of many AI companies right now. Similarly, the need to make AI faster and more energy efficient is putting so-called small language models in the spotlight. However, the other predictions were not so clear-cut. Read the full story.

James O'Donnell

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. To witness the fallout from the AI team's lively debates (and hear more about what didn't make the list), you can join our upcoming LinkedIn Live this Thursday, January 16 at 12.30pm ET. James will be talking it all over with Will Douglas Heaven, our senior editor for AI, and our news editor, Charlotte Jee.

The must-reads

I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

1 China is considering selling TikTok to Elon Musk
But it's unclear how likely an outcome that really is. (Bloomberg $)
+ It's certainly one way of allowing TikTok to remain in the US. (WSJ $)
+ For what it's worth, TikTok has dismissed the report as pure fiction. (Variety $)
+ Xiaohongshu, also known as RedNote, is dealing with an influx of American users. (WP $)

2 Amazon drivers are still delivering packages amid LA fires
They're dropping off parcels even after neighborhoods have been instructed to evacuate.
(404 Media)

3 Alexa is getting a generative AI makeover
Amazon is racing to turn its digital assistant into an AI agent. (FT $)
+ What are AI agents? (MIT Technology Review)

4 Animal manure is a major climate problem
Unfortunately, turning it into energy is easier said than done. (Vox)
+ How poop could help feed the planet. (MIT Technology Review)

5 Power lines caused many of California's worst fires
Thousands of blazes have been traced back to power infrastructure in recent decades. (NYT $)
+ Why some homes manage to withstand wildfires. (Bloomberg $)
+ The quest to build wildfire-resistant homes. (MIT Technology Review)

6 Barcelona is a hotbed of spyware startups
Researchers are increasingly concerned about its creep across Europe. (TechCrunch)

7 Mastodon's founder doesn't want to follow in Mark Zuckerberg's footsteps
Eugen Rochko has restructured the company to ensure it could never be controlled by a single individual. (Ars Technica)
+ He's made it clear he doesn't want to end up like Elon Musk, either. (Engadget)

8 Spare a thought for this Welsh would-be crypto millionaire
His 11-year quest to recover an old hard drive has come to a disappointing end. (Wired $)

9 The unbearable banality of internet lexicon
It's giving nonsense. (The Atlantic $)

10 You never know whether you'll get to see the northern lights or not
AI could help us to predict when they'll occur more accurately. (Vice)
+ Digital pictures make the lights look much more defined than they actually are. (NYT $)

Quote of the day

"Cutting fact checkers from social platforms is like disbanding your fire department."

Alan Duke, co-founder of fact-checking outlet Lead Stories, criticizes Meta's decision to ax its US-based fact checkers as the groups attempt to slow viral misinformation spreading about the wildfires in California, CNN reports.
The big story

The world is moving closer to a new cold war fought with authoritarian tech

September 2022

Despite President Biden's assurances that the US is not seeking a new cold war, one is brewing between the world's autocracies and democracies, and technology is fueling it. Authoritarian states are following China's lead and are trending toward more digital rights abuses by increasing the mass digital surveillance of citizens, censorship, and controls on individual expression. And while democracies also use massive amounts of surveillance technology, it's the tech trade relationships between authoritarian countries that's enabling the rise of digitally enabled social control. Read the full story.

Tate Ryan-Mosley

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)

+ Before indie sleaze, there was DIY counterculture site Buddyhead.
+ Did you know black holes don't actually suck anything in at all?
+ Science fiction is stuck in a loop, and can't seem to break its fixation with cyberpunk.
+ Every now and again, TV produces a perfect episode. Here are eight of them.