MIT Technology Review
Our in-depth reporting on innovation reveals and explains what’s really happening now to help you know what’s coming next. Get our journalism: http://technologyreview.com/newsletters.
  • Tech billionaires are making a risky bet with humanity’s future

    “The best way to predict the future is to invent it,” the famed computer scientist Alan Kay once said. Uttered more out of exasperation than as inspiration, his remark has nevertheless attained gospel-like status among Silicon Valley entrepreneurs, in particular a handful of tech billionaires who fancy themselves the chief architects of humanity’s future. 

    Sam Altman, Jeff Bezos, Elon Musk, and others may have slightly different goals and ambitions in the near term, but their grand visions for the next decade and beyond are remarkably similar. Framed less as technological objectives and more as existential imperatives, they include aligning AI with the interests of humanity; creating an artificial superintelligence that will solve all the world’s most pressing problems; merging with that superintelligence to achieve immortality (or something close to it); establishing a permanent, self-sustaining colony on Mars; and, ultimately, spreading out across the cosmos.

    While there’s a sprawling patchwork of ideas and philosophies powering these visions, three features play a central role, says Adam Becker, a science writer and astrophysicist: an unshakable certainty that technology can solve any problem, a belief in the necessity of perpetual growth, and a quasi-religious obsession with transcending our physical and biological limits. In his timely new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, Becker calls this triumvirate of beliefs the “ideology of technological salvation” and warns that tech titans are using it to steer humanity in a dangerous direction. 

    “In most of these isms you’ll find the idea of escape and transcendence, as well as the promise of an amazing future, full of unimaginable wonders—so long as we don’t get in the way of technological progress.”

    “The credence that tech billionaires give to these specific science-fictional futures validates their pursuit of more—to portray the growth of their businesses as a moral imperative, to reduce the complex problems of the world to simple questions of technology, [and] to justify nearly any action they might want to take,” he writes. Becker argues that the only way to break free of these visions is to see them for what they are: a convenient excuse to continue destroying the environment, skirt regulations, amass more power and control, and dismiss the very real problems of today to focus on the imagined ones of tomorrow.

    A lot of critics, academics, and journalists have tried to define or distill the Silicon Valley ethos over the years. There was the “Californian Ideology” in the mid-’90s, the “Move fast and break things” era of the early 2000s, and more recently the “Libertarianism for me, feudalism for thee” or “techno-authoritarian” views. How do you see the “ideology of technological salvation” fitting in?

    I’d say it’s very much of a piece with those earlier attempts to describe the Silicon Valley mindset. I mean, you can draw a pretty straight line from Max More’s principles of transhumanism in the ’90s to the Californian Ideology [a mashup of countercultural, libertarian, and neoliberal values] and through to what I call the ideology of technological salvation. The fact is, many of the ideas that define or animate Silicon Valley thinking have never been much of a mystery—libertarianism, an antipathy toward the government and regulation, the boundless faith in technology, the obsession with optimization.

    What can be difficult is to parse where all these ideas come from and how they fit together—or if they fit together at all. I came up with the ideology of technological salvation as a way to name and give shape to a group of interrelated concepts and philosophies that can seem sprawling and ill-defined at first, but that actually sit at the center of a worldview shared by venture capitalists, executives, and other thought leaders in the tech industry. 

    Readers will likely be familiar with the tech billionaires featured in your book and at least some of their ambitions. I’m guessing they’ll be less familiar with the various “isms” that you argue have influenced or guided their thinking. Effective altruism, rationalism, longtermism, extropianism, effective accelerationism, futurism, singularitarianism, transhumanism—there are a lot of them. Is there something that they all share?

    They’re definitely connected. In a sense, you could say they’re all versions or instantiations of the ideology of technological salvation, but there are also some very deep historical connections between the people in these groups and their aims and beliefs. The Extropians in the late ’80s believed in self-transformation through technology and freedom from limitations of any kind—ideas that Ray Kurzweil eventually helped popularize and legitimize for a larger audience with the Singularity.

    In most of these isms you’ll find the idea of escape and transcendence, as well as the promise of an amazing future, full of unimaginable wonders—so long as we don’t get in the way of technological progress. I should say that AI researcher Timnit Gebru and philosopher Émile Torres have also done a lot of great work linking these ideologies to one another and showing how they all have ties to racism, misogyny, and eugenics.

    You argue that the Singularity is the purest expression of the ideology of technological salvation. How so?

    Well, for one thing, it’s just this very simple, straightforward idea—the Singularity is coming and will occur when we merge our brains with the cloud and expand our intelligence a millionfold. This will then deepen our awareness and consciousness and everything will be amazing. In many ways, it’s a fantastical vision of a perfect technological utopia. We’re all going to live as long as we want in an eternal paradise, watched over by machines of loving grace, and everything will just get exponentially better forever. The end.

    The other isms I talk about in the book have a little more … heft isn’t the right word—they just have more stuff going on. There’s more to them, right? The rationalists and the effective altruists and the longtermists—they think that something like a singularity will happen, or could happen, but that there’s this really big danger between where we are now and that potential event. We have to address the fact that an all-powerful AI might destroy humanity—the so-called alignment problem—before any singularity can happen. 

    Then you’ve got the effective accelerationists, who are more like Kurzweil, but they’ve got more of a tech-bro spin on things. They’ve taken some of the older transhumanist ideas from the Singularity and updated them for startup culture. Marc Andreessen’s “Techno-Optimist Manifesto” [from 2023] is a good example. You could argue that all of these other philosophies that have gained purchase in Silicon Valley are just twists on Kurzweil’s Singularity, each one building on top of the core ideas of transcendence, techno-optimism, and exponential growth.

    Early on in the book you take aim at that idea of exponential growth—specifically, Kurzweil’s “Law of Accelerating Returns.” Could you explain what that is and why you think it’s flawed?

    Kurzweil thinks there’s this immutable “Law of Accelerating Returns” at work in the affairs of the universe, especially when it comes to technology. It’s the idea that technological progress isn’t linear but exponential. Advancements in one technology fuel even more rapid advancements in the future, which in turn lead to greater complexity and greater technological power, and on and on. This is just a mistake. Kurzweil uses the Law of Accelerating Returns to explain why the Singularity is inevitable, but to be clear, he’s far from the only one who believes in this so-called law.

    “I really believe that when you get as rich as some of these guys are, you can just do things that seem like thinking and no one is really going to correct you or tell you things you don’t want to hear.”

    My sense is that it’s an idea that comes from staring at Moore’s Law for too long. Moore’s Law is of course the famous prediction that the number of transistors on a chip will double roughly every two years, with a minimal increase in cost. Now, that has in fact happened for the last 50 years or so, but not because of some fundamental law in the universe. It’s because the tech industry made a choice and some very sizable investments to make it happen. Moore’s Law was ultimately this really interesting observation or projection of a historical trend, but even Gordon Moore [who first articulated it] knew that it wouldn’t and couldn’t last forever. In fact, some think it’s already over.
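
    The extrapolation Becker describes is easy to reproduce. Below is a minimal sketch in Python (not from the book), assuming the textbook starting point of the 1971 Intel 4004 with roughly 2,300 transistors and a fixed two-year doubling period; nothing in the arithmetic enforces the trend, which is exactly his point.

    ```python
    # Moore's Law treated as a formula: count(t) = count(t0) * 2 ** ((t - t0) / d),
    # where d is the assumed doubling period in years. This is a fit to a
    # historical trend, not a law of nature.

    def projected_transistors(start_count: float, start_year: int,
                              year: int, doubling_years: float = 2.0) -> float:
        """Naively extrapolate an exponential trend from a single data point."""
        return start_count * 2 ** ((year - start_year) / doubling_years)

    # Assumed reference point: Intel 4004 (1971), ~2,300 transistors.
    for year in (1971, 1991, 2011, 2021):
        print(year, f"{projected_transistors(2_300, 1971, year):,.0f}")
    # 1971 -> 2,300
    # 1991 -> 2,355,200
    # 2011 -> 2,411,724,800
    # 2021 -> 77,175,193,600
    ```

    The 2021 figure lands within a factor of two of the largest commercial chips of that era, but only because the industry kept spending enormous sums to stay on the curve; run the same line out a few more decades and it quietly demands features smaller than atoms.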

    These ideologies take inspiration from some pretty unsavory characters. Transhumanism, you say, was first popularized by the eugenicist Julian Huxley in a speech in 1951. Marc Andreessen’s “Techno-Optimist Manifesto” name-checks the noted fascist Filippo Tommaso Marinetti and his futurist manifesto. Did you get the sense while researching the book that the tech titans who champion these ideas understand their dangerous origins?

    You’re assuming in the framing of that question that there’s any rigorous thought going on here at all. As I say in the book, Andreessen’s manifesto runs almost entirely on vibes, not logic. I think someone may have told him about the futurist manifesto at some point, and he just sort of liked the general vibe, which is why he paraphrases a part of it. Maybe he learned something about Marinetti and forgot it. Maybe he didn’t care. 

    I really believe that when you get as rich as some of these guys are, you can just do things that seem like thinking and no one is really going to correct you or tell you things you don’t want to hear. For many of these billionaires, the vibes of fascism, authoritarianism, and colonialism are attractive because they’re fundamentally about creating a fantasy of control. 

    You argue that these visions of the future are being used to hasten environmental destruction, increase authoritarianism, and exacerbate inequalities. You also admit that they appeal to lots of people who aren’t billionaires. Why do you think that is? 

    I think a lot of us are also attracted to these ideas for the same reasons the tech billionaires are—they offer this fantasy of knowing what the future holds, of transcending death, and a sense that someone or something out there is in control. It’s hard to overstate how comforting a simple, coherent narrative can be in an increasingly complex and fast-moving world. This is of course what religion offers for many of us, and I don’t think it’s an accident that a sizable number of people in the rationalist and effective altruist communities are actually ex-evangelicals.

    More than any one specific technology, it seems like the most consequential thing these billionaires have invented is a sense of inevitability—that their visions for the future are somehow predestined. How does one fight against that?

    It’s a difficult question. For me, the answer was to write this book. I guess I’d also say this: Silicon Valley enjoyed well over a decade with little to no pushback on anything. That’s definitely a big part of how we ended up in this mess. There was no regulation, very little critical coverage in the press, and a lot of self-mythologizing going on. Things have started to change, especially as the social and environmental damage that tech companies and industry leaders have helped facilitate has become more clear. That understanding is an essential part of deflating the power of these tech billionaires and breaking free of their visions. When we understand that these dreams of the future are actually nightmares for the rest of us, I think you’ll see that sense of inevitability vanish pretty fast.

    This interview was edited for length and clarity.

    Bryan Gardiner is a writer based in Oakland, California. 
  • The Download: gambling with humanity’s future, and the FDA under Trump

    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

    Tech billionaires are making a risky bet with humanity’s future

    Sam Altman, Jeff Bezos, Elon Musk, and others may have slightly different goals, but their grand visions for the next decade and beyond are remarkably similar. They include aligning AI with the interests of humanity; creating an artificial superintelligence that will solve all the world’s most pressing problems; merging with that superintelligence to achieve immortality (or something close to it); establishing a permanent, self-sustaining colony on Mars; and, ultimately, spreading out across the cosmos. Three features play a central role in powering these visions, says Adam Becker, a science writer and astrophysicist: an unshakable certainty that technology can solve any problem, a belief in the necessity of perpetual growth, and a quasi-religious obsession with transcending our physical and biological limits. In his timely new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, Becker reveals how these fantastical visions conceal a darker agenda. Read the full story.

    —Bryan Gardiner

    This story is from the next print edition of MIT Technology Review, which explores power—who has it, and who wants it. It’s set to go live on Wednesday June 25, so subscribe & save 25% to read it and get a copy of the issue when it lands!

    Here’s what food and drug regulation might look like under the Trump administration

    Earlier this week, two new leaders of the US Food and Drug Administration published a list of priorities for the agency. Both Marty Makary and Vinay Prasad are controversial figures in the science community. They were generally highly respected academics until the covid pandemic, when their contrarian opinions on masking, vaccines, and lockdowns turned many of their colleagues against them.

    Given all this, along with recent mass firings of FDA employees, lots of people were pretty anxious to see what this list might include—and what we might expect the future of food and drug regulation in the US to look like. So let’s dive into the pair’s plans for new investigations, speedy approvals, and the “unleashing” of AI.

    —Jessica Hamzelou

    This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

    The must-reads

    I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

    1 NASA is investigating leaks on the ISS
    It’s postponed launching private astronauts to the station while it evaluates. (WP $)
    + Its core component has been springing small air leaks for months. (Reuters)
    + Meanwhile, this Chinese probe is en route to a near-Earth asteroid. (Wired $)

    2 Undocumented migrants are using social media to warn of ICE raids
    The DIY networks are anonymously reporting police presences across LA. (Wired $)
    + Platforms’ relationships with protest activism have changed drastically. (NY Mag $)

    3 Google’s AI Overviews is hallucinating about the fatal Air India crash
    It incorrectly stated that it involved an Airbus plane, not a Boeing 787. (Ars Technica)
    + Why Google’s AI Overviews gets things wrong. (MIT Technology Review)

    4 Chinese engineers are sneaking suitcases of hard drives into the country
    To covertly train advanced AI models. (WSJ $)
    + The US is cracking down on Huawei’s ability to produce chips. (Bloomberg $)
    + What the US-China AI race overlooks. (Rest of World)

    5 The National Hurricane Center is joining forces with DeepMind
    It’s the first time the center has used AI to predict nature’s worst storms. (NYT $)
    + Here’s what we know about hurricanes and climate change. (MIT Technology Review)

    6 OpenAI is working on a product with toymaker Mattel
    AI-powered Barbies?! (FT $)
    + Nothing is safe from the creep of AI, not even playtime. (LA Times $)
    + OpenAI has ambitions to reach billions of users. (Bloomberg $)

    7 Chatbots posing as licensed therapists may be breaking the law
    Digital rights organizations have filed a complaint to the FTC. (404 Media)
    + How do you teach an AI model to give therapy? (MIT Technology Review)

    8 Major companies are abandoning their climate commitments
    But some experts argue this may not be entirely bad. (Bloomberg $)
    + Google, Amazon and the problem with Big Tech’s climate claims. (MIT Technology Review)

    9 Vibe coding is shaking up software engineering
    Even though AI-generated code is inherently unreliable. (Wired $)
    + What is vibe coding, exactly? (MIT Technology Review)

    10 TikTok really loves hotdogs
    And who can blame it? (Insider $)

    Quote of the day

    “It kind of jams two years of work into two months.”

    —Andrew Butcher, president of the Maine Connectivity Authority, tells Ars Technica why it’s so difficult to meet the Trump administration’s new plans to increase broadband access in certain states.

    One more thing

    The surprising barrier that keeps us from building the housing we need
    It’s a tough time to try and buy a home in America. From the beginning of the pandemic to early 2024, US home prices rose by 47%. In large swaths of the country, buying a home is no longer a possibility even for those with middle-class incomes. For many, that marks the end of an American dream built around owning a house. Over the same time, rents have gone up 26%.
    The reason for the current rise in the cost of housing is clear to most economists: a lack of supply. Simply put, we don’t build enough houses and apartments, and we haven’t for years.

    But the reality is that even if we ease the endless permitting delays and begin cutting red tape, we will still be faced with a distressing fact: The construction industry is not very efficient when it comes to building stuff. Read the full story.

    —David Rotman

    We can still have nice things

    A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
    + If you’re one of the unlucky people who has triskaidekaphobia, look away now.
    + 15-year-old Nicholas is preparing to head from his home in the UK to Japan to become a professional sumo wrestler.
    + Earlier this week, London played host to 20,000 women in bald caps. But why?
    + Why do dads watch TV standing up? I need to know.
  • Powering next-gen services with AI in regulated industries 

    Businesses in highly regulated industries like financial services, insurance, pharmaceuticals, and health care are increasingly turning to AI-powered tools to streamline complex and sensitive tasks. Conversational AI-driven interfaces are helping hospitals to track the location and delivery of a patient’s time-sensitive cancer drugs. Generative AI chatbots are helping insurance customers answer questions and solve problems. And agentic AI systems are emerging to support financial services customers in making complex financial planning and budgeting decisions.

    “Over the last 15 years of digital transformation, the orientation in many regulated sectors has been to look at digital technologies as a place to provide more cost-effective and meaningful customer experience and divert customers from higher-cost, more complex channels of service,” says Peter Neufeld, who leads the EY Studio+ digital and customer experience capability at EY for financial services companies in the UK, Europe, the Middle East, and Africa. 

    DOWNLOAD THE FULL REPORT

    For many, the “last mile” of the end-to-end customer journey can present a challenge. Services at this stage often involve much more complex interactions than the usual app or self-service portal can handle. This could be dealing with a challenging health diagnosis, addressing late mortgage payments, applying for government benefits, or understanding the lifestyle you can afford in retirement. “When we get into these more complex service needs, there’s a real bias toward human interaction,” says Neufeld. “We want to speak to someone, we want to understand whether we’re making a good decision, or we might want alternative views and perspectives.” 

    But these high-cost, high-touch interactions can be less than satisfying for customers when handled through a call center if, for example, technical systems are outdated or data sources are disconnected. Those kinds of problems ultimately lead to the possibility of complaints and lost business. Good customer experience is critical for the bottom line. Customers are 3.8 times more likely to make return purchases after a successful experience than after an unsuccessful one, according to Qualtrics. Intuitive AI-driven systems—supported by robust data infrastructure that can efficiently access and share information in real time—can boost the customer experience, even in complex or sensitive situations. 

    Download the full report.

    This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

    This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
  • The Download: China’s AI agent boom, and GPS alternatives

    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

    Manus has kick-started an AI agent boom in China

    Last year, China saw a boom in foundation models, the do-everything large language models that underpin the AI revolution. This year, the focus has shifted to AI agents—systems that are less about responding to users’ queries and more about autonomously accomplishing things for them.

    There are now a host of Chinese startups building these general-purpose digital tools, which can answer emails, browse the internet to plan vacations, and even design an interactive website. Many of these have emerged in just the last two months, following in the footsteps of Manus—a general AI agent that sparked weeks of social media frenzy for invite codes after its limited-release launch in early March.

    As the race to define what a useful AI agent looks like unfolds, a mix of ambitious startups and entrenched tech giants are now testing how these tools might actually work in practice—and for whom. Read the full story.

    —Caiwei Chen

    Inside the race to find GPS alternatives

    Later this month, an inconspicuous 150-kilogram satellite is set to launch into space aboard the SpaceX Transporter 14 mission. Once in orbit, it will test super-accurate next-generation satnav technology designed to make up for the shortcomings of the US Global Positioning System (GPS).

    Despite the system’s indispensable nature, the GPS signal is easily suppressed or disrupted by everything from space weather to 5G cell towers to phone-size jammers worth a few tens of dollars. The problem has been whispered about among experts for years, but it has really come to the fore in the last three years, since Russia invaded Ukraine.

    Now, startup Xona Space Systems wants to create a space-based system that would do what GPS does but better. Read the full story.

    —Tereza Pultarova

    Why doctors should look for ways to prescribe hope

    —Jessica Hamzelou

    This week, I’ve been thinking about the powerful connection between mind and body. Some new research suggests that people with heart conditions have better outcomes when they are more hopeful and optimistic. Hopelessness, on the other hand, is associated with a significantly higher risk of death.

    The findings build upon decades of fascinating research into the phenomenon of the placebo effect. Our beliefs and expectations about a medicine (or a sham treatment) can change the way it works. The placebo effect’s “evil twin,” the nocebo effect, is just as powerful—negative thinking has been linked to real symptoms.

    Researchers are still trying to understand the connection between body and mind, and how our thoughts can influence our physiology. In the meantime, many are developing ways to harness it in hospital settings. Is it possible for a doctor to prescribe hope? Read the full story.

    This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

    The must-reads

    I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

    1 Elon Musk threatened to cut off NASA’s use of SpaceX’s Dragon spacecraft
    His war of words with Donald Trump is dramatically escalating. (WP $)
    + If Musk actually carried through with his threat, NASA would seriously struggle. (NYT $)
    + Silicon Valley is starting to pick sides. (Wired $)
    + It appears as though Musk has more to lose from their bruising breakup. (NY Mag $)

    2 Apple and Alibaba’s AI rollout in China has been delayed
    It’s the latest victim of Trump’s trade war. (FT $)
    + The deal is supposed to support iPhones’ AI offerings in the country. (Reuters)

    3 X’s new policy blocks the use of its posts to ‘fine-tune or train’ AI models
    Unless companies strike a deal with them, that is. (TechCrunch)
    + The platform could end up striking agreements like Reddit and Google. (The Verge)

    4 RFK Jr’s new hire is hunting for proof that vaccines cause autism
    Vaccine skeptic David Geier is seeking access to a database he was previously barred from. (WSJ $)
    + How measuring vaccine hesitancy could help health professionals tackle it. (MIT Technology Review)

    5 Anthropic has launched a new service for the military
    Claude Gov is designed specifically for US defense and intelligence agencies. (The Verge)
    + Generative AI is learning to spy for the US military. (MIT Technology Review)

    6 There’s no guarantee your billion-dollar startup won’t fail
    In fact, one in five of them will. (Bloomberg $)
    + Beware the rise of the AI coding startup. (Reuters)

    7 Walmart’s drone deliveries are taking off
    It’s expanding to 100 new US stores in the next year. (Wired $)

    8 AI might be able to tell us how old the Dead Sea Scrolls really are
    Models suggest they’re even older than we previously thought. (The Economist $)
    + How AI is helping historians better understand our past. (MIT Technology Review)

    9 All-in-one super apps are a hit in the Gulf
    They’re following in China’s footsteps. (Rest of World)

    10 Nintendo’s Switch 2 has revived the midnight launch event
    Fans queued for hours outside stores to get their hands on the new console. (Insider $)
    + How the company managed to dodge Trump’s tariffs. (The Guardian)

    Quote of the day

    “Elon finally found a way to make Twitter fun again.”

    —Dan Pfeiffer, a host of the political podcast Pod Save America, jokes about Elon Musk and Donald Trump’s ongoing feud in a post on X.

    One more thing

    This rare earth metal shows us the future of our planet’s resources

    We’re in the middle of a potentially transformative moment. Metals discovered barely a century ago now underpin the technologies we’re relying on for cleaner energy, and not having enough of them could slow progress. 

    Take neodymium, one of the rare earth metals. It’s used in cryogenic coolers to reach ultra-low temperatures needed for devices like superconductors and in high-powered magnets that power everything from smartphones to wind turbines. And very soon, demand for it could outstrip supply. What happens then? And what does it reveal about issues across wider supply chains? Read our story to find out.

    —Casey Crownhart

    We can still have nice things

    A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
    + Sightings of Bigfoot just happen to correlate with black bear populations? I smell a conspiracy!
    + Watch as these symbols magically transform into a pretty impressive Black Sabbath mural.
    + Underwater rugby is taking off in the UK.
    + Fed up with beige Gen Z trends, TikTok is bringing the ’80s back.
  • Manus has kick-started an AI agent boom in China

    Last year, China saw a boom in foundation models, the do-everything large language models that underpin the AI revolution. This year, the focus has shifted to AI agents—systems that are less about responding to users’ queries and more about autonomously accomplishing things for them. 

    There are now a host of Chinese startups building these general-purpose digital tools, which can answer emails, browse the internet to plan vacations, and even design an interactive website. Many of these have emerged in just the last two months, following in the footsteps of Manus—a general AI agent that sparked weeks of social media frenzy for invite codes after its limited-release launch in early March. 

    These emerging AI agents aren’t large language models themselves. Instead, they’re built on top of them, using a workflow-based structure designed to get things done. A lot of these systems also introduce a different way of interacting with AI. Rather than just chatting back and forth with users, they are optimized for managing and executing multistep tasks—booking flights, managing schedules, conducting research—by using external tools and remembering instructions. 
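
    In code, that workflow layer usually amounts to a simple loop: the underlying model proposes the next step, the system runs it against an external tool, and the outcome is appended to a running memory before the model is consulted again. Here is a minimal sketch of that pattern in Python; the tool registry, the llm_complete stand-in, and the stopping rule are hypothetical illustrations, not any particular product’s internals.

    # A minimal, illustrative agent loop: plan with the model, call a tool,
    # remember the result, repeat. All names here are hypothetical stand-ins.

    TOOLS = {
        "search_flights": lambda query: f"3 flights found for {query!r}",
    }

    def llm_complete(history: str) -> dict:
        # Toy stand-in for a call to the underlying large language model:
        # it asks for one search, then declares the task finished.
        if "search_flights ->" in history:
            return {"action": "finish", "answer": "Booked the cheapest option."}
        return {"action": "search_flights", "input": "SFO to Tokyo next Tuesday"}

    def run_agent(task: str, max_steps: int = 10) -> str:
        memory = [f"Task: {task}"]                      # instructions persist across steps
        for _ in range(max_steps):
            decision = llm_complete("\n".join(memory))  # the model picks the next action
            if decision["action"] == "finish":
                return decision["answer"]
            result = TOOLS[decision["action"]](decision["input"])  # external tool call
            memory.append(f"{decision['action']} -> {result}")     # remember the outcome
        return "Stopped after hitting the step limit."

    print(run_agent("Book me a flight to Tokyo"))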

    China could take the lead on building these kinds of agents. The country’s tightly integrated app ecosystems, rapid product cycles, and digitally fluent user base could provide a favorable environment for embedding AI into daily life. 

    For now, its leading AI agent startups are focusing their attention on the global market, because the best Western models don’t operate inside China’s firewalls. But that could change soon: Tech giants like ByteDance and Tencent are preparing their own AI agents that could bake automation directly into their native super-apps, pulling data from their vast ecosystem of programs that dominate many aspects of daily life in the country. 

    As the race to define what a useful AI agent looks like unfolds, a mix of ambitious startups and entrenched tech giants are now testing how these tools might actually work in practice—and for whom.

    Set the standard

    It’s been a whirlwind few months for Manus, which was developed by the Wuhan-based startup Butterfly Effect. The company raised $75 million in a funding round led by the US venture capital firm Benchmark, took the product on an ambitious global roadshow, and hired dozens of new employees. 

    Even before registration opened to the public in May, Manus had become a reference point for what a broad, consumer‑oriented AI agent should accomplish. Rather than handling narrow chores for businesses, this “general” agent is designed to be able to help with everyday tasks like trip planning, stock comparison, or your kid’s school project. 

    Unlike previous AI agents, Manus uses a browser-based sandbox that lets users supervise the agent like an intern, watching in real time as it scrolls through web pages, reads articles, or codes actions. It also proactively asks clarifying questions and supports long-term memory that serves as context for future tasks.

    “Manus represents a promising product experience for AI agents,” says Ang Li, cofounder and CEO of Simular, a startup based in Palo Alto, California, that’s building computer use agents, AI agents that control a virtual computer. “I believe Chinese startups have a huge advantage when it comes to designing consumer products, thanks to cutthroat domestic competition that leads to fast execution and greater attention to product details.”

    In the case of Manus, the competition is moving fast. Two of the buzziest follow‑ups, Genspark and Flowith, are already boasting benchmark scores that match or edge past Manus’s. 

    Genspark, led by former Baidu executives Eric Jing and Kay Zhu, links many small “super agents” through what it calls multi‑component prompting. The agent can switch among several large language models, accepts both images and text, and carries out tasks from making slide decks to placing phone calls. Whereas Manus relies heavily on Browser Use, a popular open-source product that lets agents operate a web browser in a virtual window like a human, Genspark directly integrates with a wide array of tools and APIs. Launched in April, the product already has over 5 million users and over $36 million in yearly revenue, the company says.
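
    For a concrete sense of what “operating a web browser like a human” means to a developer, this is roughly how the open-source Browser Use library is driven from Python. The sketch follows the project’s published examples; the task string and model choice are placeholders, and the exact interface may have changed since.

    import asyncio

    from browser_use import Agent            # the open-source library named above
    from langchain_openai import ChatOpenAI  # any chat model the library supports

    async def main():
        agent = Agent(
            task="Compare prices for a Tokyo hotel in August and list the top three",
            llm=ChatOpenAI(model="gpt-4o"),  # placeholder model choice
        )
        history = await agent.run()          # the agent scrolls, clicks, and reads pages
        print(history.final_result())        # method name per the project's examples

    asyncio.run(main())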

    Flowith, the work of a young team that first grabbed public attention in April 2025 at a developer event hosted by the popular social media app Xiaohongshu, takes a different tack. Marketed as an “infinite agent,” it opens on a blank canvas where each question becomes a node on a branching map. Users can backtrack, take new branches, and store results in personal or sharable “knowledge gardens”—a design that feels more like project management software (think Notion) than a typical chat interface. Every inquiry or task builds its own mind-map-like graph, encouraging a more nonlinear and creative interaction with AI. Flowith’s core agent, NEO, runs in the cloud and can perform scheduled tasks like sending emails and compiling files. The founders want the app to be a “knowledge marketbase” and aim to tap into the social aspect of AI, with the aspiration of becoming “the OnlyFans of AI knowledge creators.”
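
    Structurally, that canvas is a tree of question-and-answer nodes rather than a linear chat transcript. A hypothetical sketch of the underlying idea (the class and method names are illustrative, not Flowith’s actual code):

    from dataclasses import dataclass, field

    @dataclass
    class CanvasNode:
        question: str
        answer: str = ""
        children: list["CanvasNode"] = field(default_factory=list)

        def branch(self, question: str) -> "CanvasNode":
            # Any earlier node can be revisited and forked into a new line of
            # inquiry, instead of appending to one ever-growing chat history.
            child = CanvasNode(question=question)
            self.children.append(child)
            return child

    root = CanvasNode("Plan a three-day Kyoto itinerary")
    food = root.branch("Where should we eat near Gion?")
    detour = root.branch("What changes if we add a day in Nara?")  # backtrack, new branch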

    What they also share with Manus is the global ambition. Both Genspark and Flowith have stated that their primary focus is the international market.

    A global address

    Startups like Manus, Genspark, and Flowith—though founded by Chinese entrepreneurs—could blend seamlessly into the global tech scene and compete effectively abroad. Founders, investors, and analysts that MIT Technology Review has spoken to believe Chinese companies are moving fast, executing well, and quickly coming up with new products. 

    Money reinforces the pull to launch overseas. Customers there pay more, and there are plenty to go around. “You can price in USD, and with the exchange rate that’s a sevenfold multiplier,” Manus cofounder Xiao Hong quipped on a podcast. “Even if we’re only operating at 10% power because of cultural differences overseas, we’ll still make more than in China.”

    But creating the same functionality in China is a challenge. Major US AI companies including OpenAI and Anthropic have opted out of mainland China because of geopolitical risks and challenges with regulatory compliance. Their absence initially created a black market as users resorted to VPNs and third-party mirrors to access tools like ChatGPT and Claude. That vacuum has since been filled by a new wave of Chinese chatbots—DeepSeek, Doubao, Kimi—but the appetite for foreign models hasn’t gone away. 

    Manus, for example, uses Anthropic’s Claude Sonnet—widely considered the top model for agentic tasks. Manus cofounder Zhang Tao has repeatedly praised Claude’s ability to juggle tools, remember contexts, and hold multi‑round conversations—all crucial for turning chatty software into an effective executive assistant.

    But the company’s use of Sonnet has made its agent functionally unusable inside China without a VPN. If you open Manus from a mainland IP address, you’ll see a notice explaining that the team is “working on integrating Qwen’s model,” a special local version that is built on top of Alibaba’s open-source model. 

    An engineer overseeing ByteDance’s work on developing an agent, who spoke to MIT Technology Review anonymously to avoid sanction, said that the absence of Claude Sonnet models “limits everything we do in China.” DeepSeek’s open models, he added, still hallucinate too often and lack training on real‑world workflows. Developers we spoke with rank Alibaba’s Qwen series as the best domestic alternative, yet most say that switching to Qwen knocks performance down a notch.

    Jiaxin Pei, a postdoctoral researcher at Stanford’s Institute for Human‑Centered AI, thinks that gap will close: “Building agentic capabilities in base LLMs has become a key focus for many LLM builders, and once people realize the value of this, it will only be a matter of time.”

    For now, Manus is doubling down on audiences it can already serve. In a written response, the company said its “primary focus is overseas expansion,” noting that new offices in San Francisco, Singapore, and Tokyo have opened in the past month.

    A super‑app approach

    Although the concept of AI agents is still relatively new, the consumer-facing AI app market in China is already crowded with major tech players. DeepSeek remains the most widely used, while ByteDance’s Doubao and Moonshot’s Kimi have also become household names. However, most of these apps are still optimized for chat and entertainment rather than task execution. This gap in the local market has pushed China’s big tech firms to roll out their own user-facing agents, though early versions remain uneven in quality and rough around the edges. 

    ByteDance is testing Coze Space, an AI agent based on its own Doubao model family that lets users toggle between “plan” and “execute” modes, so they can either directly guide the agent’s actions or step back and watch it work autonomously. It connects up to 14 popular apps, including GitHub, Notion, and the company’s own Lark office suite. Early reviews say the tool can feel clunky and has a high failure rate, but it clearly aims to match what Manus offers.
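
    The plan/execute distinction comes down to whether each proposed action is gated behind user approval before it runs. A hypothetical sketch of such a toggle (illustrative only; Coze Space’s actual internals are not public):

    from enum import Enum

    class Mode(Enum):
        PLAN = "plan"        # the user approves each proposed step
        EXECUTE = "execute"  # the agent acts autonomously

    def perform(step: str) -> None:
        print(f"running: {step}")  # stand-in for the real action

    def run(steps: list[str], mode: Mode) -> None:
        for step in steps:
            if mode is Mode.PLAN and input(f"Run {step!r}? [y/n] ") != "y":
                continue  # in plan mode, unapproved steps are skipped
            perform(step)

    run(["draft the report", "email it to the team"], Mode.EXECUTE)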

    Meanwhile, Zhipu AI has released a free agent called AutoGLM Rumination, built on its proprietary ChatGLM models. Shanghai‑based Minimax has launched Minimax Agent. Both products look almost identical to Manus and demo basic tasks such as building a simple website, planning a trip, making a small Flash game, or running quick data analysis.

    Despite the limited usability of most general AI agents launched within China, big companies have plans to change that. During a May 15 earnings call, Tencent president Liu Zhiping teased an agent that would weave automation directly into China’s most ubiquitous app, WeChat. 

    Considered the original super-app, WeChat already handles messaging, mobile payments, news, and millions of mini‑programs that act like embedded apps. These programs give Tencent, its developer, access to data from millions of services that pervade everyday life in China, an advantage most competitors can only envy.

    Historically, China’s consumer internet has splintered into competing walled gardens—share a Taobao link in WeChat and it resolves as plaintext, not a preview card. Unlike the more interoperable Western internet, China’s tech giants have long resisted integration with one another, choosing to wage platform war at the expense of a seamless user experience.

    But the use of mini‑programs has given WeChat unprecedented reach across services that once resisted interoperability, from gym bookings to grocery orders. An agent able to roam that ecosystem could bypass the integration headaches dogging independent startups.

    Alibaba, the e-commerce giant behind the Qwen model series, has been a front-runner in China’s AI race but has been slower to release consumer-facing products. Even though Qwen was the most downloaded open-source model on Hugging Face in 2024, it didn’t power a dedicated chatbot app until early 2025. In March, Alibaba rebranded its cloud storage and search app Quark into an all-in-one AI search tool. By June, Quark had introduced DeepResearch—a new mode that marks its most agent-like effort to date. 

    ByteDance and Alibaba did not reply to MIT Technology Review’s request for comments.

    “Historically, Chinese tech products tend to pursue the all-in-one, super-app approach, and the latest Chinese AI agents reflect just that,” says Li of Simular, who previously worked at Google DeepMind on AI-enabled work automation. “In contrast, AI agents in the US are more focused on serving specific verticals.”

    Pei, the researcher at Stanford, says that existing tech giants could have a huge advantage in bringing the vision of general AI agents to life—especially those with built-in integration across services. “The customer-facing AI agent market is still very early, with tons of problems like authentication and liability,” he says. “But companies that already operate across a wide range of services have a natural advantage in deploying agents at scale.”
    #manus #has #kickstarted #agent #boom
    Manus has kick-started an AI agent boom in China
    Last year, China saw a boom in foundation models, the do-everything large language models that underpin the AI revolution. This year, the focus has shifted to AI agents—systems that are less about responding to users’ queries and more about autonomously accomplishing things for them.  There are now a host of Chinese startups building these general-purpose digital tools, which can answer emails, browse the internet to plan vacations, and even design an interactive website. Many of these have emerged in just the last two months, following in the footsteps of Manus—a general AI agent that sparked weeks of social media frenzy for invite codes after its limited-release launch in early March.  These emerging AI agents aren’t large language models themselves. Instead, they’re built on top of them, using a workflow-based structure designed to get things done. A lot of these systems also introduce a different way of interacting with AI. Rather than just chatting back and forth with users, they are optimized for managing and executing multistep tasks—booking flights, managing schedules, conducting research—by using external tools and remembering instructions.  China could take the lead on building these kinds of agents. The country’s tightly integrated app ecosystems, rapid product cycles, and digitally fluent user base could provide a favorable environment for embedding AI into daily life.  For now, its leading AI agent startups are focusing their attention on the global market, because the best Western models don’t operate inside China’s firewalls. But that could change soon: Tech giants like ByteDance and Tencent are preparing their own AI agents that could bake automation directly into their native super-apps, pulling data from their vast ecosystem of programs that dominate many aspects of daily life in the country.  As the race to define what a useful AI agent looks like unfolds, a mix of ambitious startups and entrenched tech giants are now testing how these tools might actually work in practice—and for whom. Set the standard It’s been a whirlwind few months for Manus, which was developed by the Wuhan-based startup Butterfly Effect. The company raised million in a funding round led by the US venture capital firm Benchmark, took the product on an ambitious global roadshow, and hired dozens of new employees.  Even before registration opened to the public in May, Manus had become a reference point for what a broad, consumer‑oriented AI agent should accomplish. Rather than handling narrow chores for businesses, this “general” agent is designed to be able to help with everyday tasks like trip planning, stock comparison, or your kid’s school project.  Unlike previous AI agents, Manus uses a browser-based sandbox that lets users supervise the agent like an intern, watching in real time as it scrolls through web pages, reads articles, or codes actions. It also proactively asks clarifying questions, supports long-term memory that would serve as context for future tasks. “Manus represents a promising product experience for AI agents,” says Ang Li, cofounder and CEO of Simular, a startup based in Palo Alto, California, that’s building computer use agents, AI agents that control a virtual computer. “I believe Chinese startups have a huge advantage when it comes to designing consumer products, thanks to cutthroat domestic competition that leads to fast execution and greater attention to product details.” In the case of Manus, the competition is moving fast. 
Two of the most buzzy follow‑ups, Genspark and Flowith, for example, are already boasting benchmark scores that match or edge past Manus’s.  Genspark, led by former Baidu executives Eric Jing and Kay Zhu, links many small “super agents” through what it calls multi‑component prompting. The agent can switch among several large language models, accepts both images and text, and carries out tasks from making slide decks to placing phone calls. Whereas Manus relies heavily on Browser Use, a popular open-source product that lets agents operate a web browser in a virtual window like a human, Genspark directly integrates with a wide array of tools and APIs. Launched in April, the company says that it already has over 5 million users and over million in yearly revenue. Flowith, the work of a young team that first grabbed public attention in April 2025 at a developer event hosted by the popular social media app Xiaohongshu, takes a different tack. Marketed as an “infinite agent,” it opens on a blank canvas where each question becomes a node on a branching map. Users can backtrack, take new branches, and store results in personal or sharable “knowledge gardens”—a design that feels more like project management softwarethan a typical chat interface. Every inquiry or task builds its own mind-map-like graph, encouraging a more nonlinear and creative interaction with AI. Flowith’s core agent, NEO, runs in the cloud and can perform scheduled tasks like sending emails and compiling files. The founders want the app to be a “knowledge marketbase”, and aims to tap into the social aspect of AI with the aspiration of becoming “the OnlyFans of AI knowledge creators”. What they also share with Manus is the global ambition. Both Genspark and Flowith have stated that their primary focus is the international market. A global address Startups like Manus, Genspark, and Flowith—though founded by Chinese entrepreneurs—could blend seamlessly into the global tech scene and compete effectively abroad. Founders, investors, and analysts that MIT Technology Review has spoken to believe Chinese companies are moving fast, executing well, and quickly coming up with new products.  Money reinforces the pull to launch overseas. Customers there pay more, and there are plenty to go around. “You can price in USD, and with the exchange rate that’s a sevenfold multiplier,” Manus cofounder Xiao Hong quipped on a podcast. “Even if we’re only operating at 10% power because of cultural differences overseas, we’ll still make more than in China.” But creating the same functionality in China is a challenge. Major US AI companies including OpenAI and Anthropic have opted out of mainland China because of geopolitical risks and challenges with regulatory compliance. Their absence initially created a black market as users resorted to VPNs and third-party mirrors to access tools like ChatGPT and Claude. That vacuum has since been filled by a new wave of Chinese chatbots—DeepSeek, Doubao, Kimi—but the appetite for foreign models hasn’t gone away.  Manus, for example, uses Anthropic’s Claude Sonnet—widely considered the top model for agentic tasks. Manus cofounder Zhang Tao has repeatedly praised Claude’s ability to juggle tools, remember contexts, and hold multi‑round conversations—all crucial for turning chatty software into an effective executive assistant. But the company’s use of Sonnet has made its agent functionally unusable inside China without a VPN. 
If you open Manus from a mainland IP address, you’ll see a notice explaining that the team is “working on integrating Qwen’s model,” a special local version that is built on top of Alibaba’s open-source model.  An engineer overseeing ByteDance’s work on developing an agent, who spoke to MIT Technology Review anonymously to avoid sanction, said that the absence of Claude Sonnet models “limits everything we do in China.” DeepSeek’s open models, he added, still hallucinate too often and lack training on real‑world workflows. Developers we spoke with rank Alibaba’s Qwen series as the best domestic alternative, yet most say that switching to Qwen knocks performance down a notch. Jiaxin Pei, a postdoctoral researcher at Stanford’s Institute for Human‑Centered AI, thinks that gap will close: “Building agentic capabilities in base LLMs has become a key focus for many LLM builders, and once people realize the value of this, it will only be a matter of time.” For now, Manus is doubling down on audiences it can already serve. In a written response, the company said its “primary focus is overseas expansion,” noting that new offices in San Francisco, Singapore, and Tokyo have opened in the past month. A super‑app approach Although the concept of AI agents is still relatively new, the consumer-facing AI app market in China is already crowded with major tech players. DeepSeek remains the most widely used, while ByteDance’s Doubao and Moonshot’s Kimi have also become household names. However, most of these apps are still optimized for chat and entertainment rather than task execution. This gap in the local market has pushed China’s big tech firms to roll out their own user-facing agents, though early versions remain uneven in quality and rough around the edges.  ByteDance is testing Coze Space, an AI agent based on its own Doubao model family that lets users toggle between “plan” and “execute” modes, so they can either directly guide the agent’s actions or step back and watch it work autonomously. It connects up to 14 popular apps, including GitHub, Notion, and the company’s own Lark office suite. Early reviews say the tool can feel clunky and has a high failure rate, but it clearly aims to match what Manus offers. Meanwhile, Zhipu AI has released a free agent called AutoGLM Rumination, built on its proprietary ChatGLM models. Shanghai‑based Minimax has launched Minimax Agent. Both products look almost identical to Manus and demo basic tasks such as building a simple website, planning a trip, making a small Flash game, or running quick data analysis. Despite the limited usability of most general AI agents launched within China, big companies have plans to change that. During a May 15 earnings call, Tencent president Liu Zhiping teased an agent that would weave automation directly into China’s most ubiquitous app, WeChat.  Considered the original super-app, WeChat already handles messaging, mobile payments, news, and millions of mini‑programs that act like embedded apps. These programs give Tencent, its developer, access to data from millions of services that pervade everyday life in China, an advantage most competitors can only envy. Historically, China’s consumer internet has splintered into competing walled gardens—share a Taobao link in WeChat and it resolves as plaintext, not a preview card. Unlike the more interoperable Western internet, China’s tech giants have long resisted integration with one another, choosing to wage platform war at the expense of a seamless user experience. 
But the use of mini‑programs has given WeChat unprecedented reach across services that once resisted interoperability, from gym bookings to grocery orders. An agent able to roam that ecosystem could bypass the integration headaches dogging independent startups. Alibaba, the e-commerce giant behind the Qwen model series, has been a front-runner in China’s AI race but has been slower to release consumer-facing products. Even though Qwen was the most downloaded open-source model on Hugging Face in 2024, it didn’t power a dedicated chatbot app until early 2025. In March, Alibaba rebranded its cloud storage and search app Quark into an all-in-one AI search tool. By June, Quark had introduced DeepResearch—a new mode that marks its most agent-like effort to date.  ByteDance and Alibaba did not reply to MIT Technology Review’s request for comments. “Historically, Chinese tech products tend to pursue the all-in-one, super-app approach, and the latest Chinese AI agents reflect just that,” says Li of Simular, who previously worked at Google DeepMind on AI-enabled work automation. “In contrast, AI agents in the US are more focused on serving specific verticals.” Pei, the researcher at Stanford, says that existing tech giants could have a huge advantage in bringing the vision of general AI agents to life—especially those with built-in integration across services. “The customer-facing AI agent market is still very early, with tons of problems like authentication and liability,” he says. “But companies that already operate across a wide range of services have a natural advantage in deploying agents at scale.” #manus #has #kickstarted #agent #boom
    WWW.TECHNOLOGYREVIEW.COM
    Manus has kick-started an AI agent boom in China
    Last year, China saw a boom in foundation models, the do-everything large language models that underpin the AI revolution. This year, the focus has shifted to AI agents—systems that are less about responding to users’ queries and more about autonomously accomplishing things for them.  There are now a host of Chinese startups building these general-purpose digital tools, which can answer emails, browse the internet to plan vacations, and even design an interactive website. Many of these have emerged in just the last two months, following in the footsteps of Manus—a general AI agent that sparked weeks of social media frenzy for invite codes after its limited-release launch in early March.  These emerging AI agents aren’t large language models themselves. Instead, they’re built on top of them, using a workflow-based structure designed to get things done. A lot of these systems also introduce a different way of interacting with AI. Rather than just chatting back and forth with users, they are optimized for managing and executing multistep tasks—booking flights, managing schedules, conducting research—by using external tools and remembering instructions.  China could take the lead on building these kinds of agents. The country’s tightly integrated app ecosystems, rapid product cycles, and digitally fluent user base could provide a favorable environment for embedding AI into daily life.  For now, its leading AI agent startups are focusing their attention on the global market, because the best Western models don’t operate inside China’s firewalls. But that could change soon: Tech giants like ByteDance and Tencent are preparing their own AI agents that could bake automation directly into their native super-apps, pulling data from their vast ecosystem of programs that dominate many aspects of daily life in the country.  As the race to define what a useful AI agent looks like unfolds, a mix of ambitious startups and entrenched tech giants are now testing how these tools might actually work in practice—and for whom. Set the standard It’s been a whirlwind few months for Manus, which was developed by the Wuhan-based startup Butterfly Effect. The company raised $75 million in a funding round led by the US venture capital firm Benchmark, took the product on an ambitious global roadshow, and hired dozens of new employees.  Even before registration opened to the public in May, Manus had become a reference point for what a broad, consumer‑oriented AI agent should accomplish. Rather than handling narrow chores for businesses, this “general” agent is designed to be able to help with everyday tasks like trip planning, stock comparison, or your kid’s school project.  Unlike previous AI agents, Manus uses a browser-based sandbox that lets users supervise the agent like an intern, watching in real time as it scrolls through web pages, reads articles, or codes actions. It also proactively asks clarifying questions, supports long-term memory that would serve as context for future tasks. “Manus represents a promising product experience for AI agents,” says Ang Li, cofounder and CEO of Simular, a startup based in Palo Alto, California, that’s building computer use agents, AI agents that control a virtual computer. “I believe Chinese startups have a huge advantage when it comes to designing consumer products, thanks to cutthroat domestic competition that leads to fast execution and greater attention to product details.” In the case of Manus, the competition is moving fast. 
Two of the most buzzy follow‑ups, Genspark and Flowith, for example, are already boasting benchmark scores that match or edge past Manus’s.  Genspark, led by former Baidu executives Eric Jing and Kay Zhu, links many small “super agents” through what it calls multi‑component prompting. The agent can switch among several large language models, accepts both images and text, and carries out tasks from making slide decks to placing phone calls. Whereas Manus relies heavily on Browser Use, a popular open-source product that lets agents operate a web browser in a virtual window like a human, Genspark directly integrates with a wide array of tools and APIs. Launched in April, the company says that it already has over 5 million users and over $36 million in yearly revenue. Flowith, the work of a young team that first grabbed public attention in April 2025 at a developer event hosted by the popular social media app Xiaohongshu, takes a different tack. Marketed as an “infinite agent,” it opens on a blank canvas where each question becomes a node on a branching map. Users can backtrack, take new branches, and store results in personal or sharable “knowledge gardens”—a design that feels more like project management software (think Notion) than a typical chat interface. Every inquiry or task builds its own mind-map-like graph, encouraging a more nonlinear and creative interaction with AI. Flowith’s core agent, NEO, runs in the cloud and can perform scheduled tasks like sending emails and compiling files. The founders want the app to be a “knowledge marketbase”, and aims to tap into the social aspect of AI with the aspiration of becoming “the OnlyFans of AI knowledge creators”. What they also share with Manus is the global ambition. Both Genspark and Flowith have stated that their primary focus is the international market. A global address Startups like Manus, Genspark, and Flowith—though founded by Chinese entrepreneurs—could blend seamlessly into the global tech scene and compete effectively abroad. Founders, investors, and analysts that MIT Technology Review has spoken to believe Chinese companies are moving fast, executing well, and quickly coming up with new products.  Money reinforces the pull to launch overseas. Customers there pay more, and there are plenty to go around. “You can price in USD, and with the exchange rate that’s a sevenfold multiplier,” Manus cofounder Xiao Hong quipped on a podcast. “Even if we’re only operating at 10% power because of cultural differences overseas, we’ll still make more than in China.” But creating the same functionality in China is a challenge. Major US AI companies including OpenAI and Anthropic have opted out of mainland China because of geopolitical risks and challenges with regulatory compliance. Their absence initially created a black market as users resorted to VPNs and third-party mirrors to access tools like ChatGPT and Claude. That vacuum has since been filled by a new wave of Chinese chatbots—DeepSeek, Doubao, Kimi—but the appetite for foreign models hasn’t gone away.  Manus, for example, uses Anthropic’s Claude Sonnet—widely considered the top model for agentic tasks. Manus cofounder Zhang Tao has repeatedly praised Claude’s ability to juggle tools, remember contexts, and hold multi‑round conversations—all crucial for turning chatty software into an effective executive assistant. But the company’s use of Sonnet has made its agent functionally unusable inside China without a VPN. 
If you open Manus from a mainland IP address, you’ll see a notice explaining that the team is “working on integrating Qwen’s model”—a local version of the agent built on top of Alibaba’s open-source models.

An engineer overseeing ByteDance’s work on developing an agent, who spoke to MIT Technology Review anonymously to avoid sanction, said that the absence of Claude Sonnet “limits everything we do in China.” DeepSeek’s open models, he added, still hallucinate too often and lack training on real-world workflows. Developers we spoke with rank Alibaba’s Qwen series as the best domestic alternative, yet most say that switching to Qwen knocks performance down a notch.

Jiaxin Pei, a postdoctoral researcher at Stanford’s Institute for Human-Centered AI, thinks that gap will close: “Building agentic capabilities in base LLMs has become a key focus for many LLM builders, and once people realize the value of this, it will only be a matter of time.”

For now, Manus is doubling down on audiences it can already serve. In a written response, the company said its “primary focus is overseas expansion,” noting that new offices in San Francisco, Singapore, and Tokyo have opened in the past month.

A super-app approach

Although the concept of AI agents is still relatively new, the consumer-facing AI app market in China is already crowded with major tech players. DeepSeek remains the most widely used, while ByteDance’s Doubao and Moonshot’s Kimi have also become household names. However, most of these apps are still optimized for chat and entertainment rather than task execution. This gap in the local market has pushed China’s big tech firms to roll out their own user-facing agents, though early versions remain uneven in quality and rough around the edges.

ByteDance is testing Coze Space, an AI agent based on its own Doubao model family that lets users toggle between “plan” and “execute” modes, so they can either directly guide the agent’s actions or step back and watch it work autonomously. It connects to up to 14 popular apps, including GitHub, Notion, and the company’s own Lark office suite. Early reviews say the tool can feel clunky and has a high failure rate, but it clearly aims to match what Manus offers.

Meanwhile, Zhipu AI has released a free agent called AutoGLM Rumination, built on its proprietary ChatGLM models, and Shanghai-based Minimax has launched Minimax Agent. Both products look almost identical to Manus and demo basic tasks such as building a simple website, planning a trip, making a small Flash game, or running quick data analysis.

Despite the limited usability of most general AI agents launched within China, big companies have plans to change that. During a May 15 earnings call, Tencent president Liu Zhiping teased an agent that would weave automation directly into China’s most ubiquitous app, WeChat.

Considered the original super-app, WeChat already handles messaging, mobile payments, news, and millions of mini-programs that act like embedded apps. These programs give Tencent, its developer, access to data from millions of services that pervade everyday life in China—an advantage most competitors can only envy.

Historically, China’s consumer internet has splintered into competing walled gardens: share a Taobao link in WeChat and it resolves as plain text, not a preview card. Unlike the more interoperable Western internet, China’s tech giants have long resisted integration with one another, choosing to wage platform wars at the expense of a seamless user experience.
But the use of mini-programs has given WeChat unprecedented reach across services that once resisted interoperability, from gym bookings to grocery orders. An agent able to roam that ecosystem could bypass the integration headaches dogging independent startups.

Alibaba, the e-commerce giant behind the Qwen model series, has been a front-runner in China’s AI race but has been slower to release consumer-facing products. Even though Qwen was the most downloaded open-source model on Hugging Face in 2024, it didn’t power a dedicated chatbot app until early 2025. In March, Alibaba rebranded its cloud storage and search app Quark into an all-in-one AI search tool. By June, Quark had introduced DeepResearch, a new mode that marks its most agent-like effort to date.

ByteDance and Alibaba did not reply to MIT Technology Review’s requests for comment.

“Historically, Chinese tech products tend to pursue the all-in-one, super-app approach, and the latest Chinese AI agents reflect just that,” says Li of Simular, who previously worked at Google DeepMind on AI-enabled work automation. “In contrast, AI agents in the US are more focused on serving specific verticals.”

Pei, the researcher at Stanford, says that existing tech giants could have a huge advantage in bringing the vision of general AI agents to life—especially those with built-in integration across services. “The customer-facing AI agent market is still very early, with tons of problems like authentication and liability,” he says. “But companies that already operate across a wide range of services have a natural advantage in deploying agents at scale.”
  • The Download: AI’s role in math, and calculating its energy footprint

    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

    What’s next for AI and math

    The modern world is built on mathematics. Math lets us model complex systems such as the way air flows around an aircraft, the way financial markets fluctuate, and the way blood flows through the heart. Mathematicians have used computers for decades, but the new vision is that AI might help them crack problems that were previously uncrackable.  

However, there’s a huge difference between AI that can solve the kinds of problems set in high school—math that the latest generation of models has already mastered—and AI that could (in theory) solve the kinds of problems that professional mathematicians spend careers chipping away at. Here are three ways to understand that gulf.

—Will Douglas Heaven

This story is from our What’s Next series, which looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

    Inside the effort to tally AI’s energy appetite

    —James O’Donnell

    After working on it for months, my colleague Casey Crownhart and I finally saw our story on AI’s energy and emissions burden go live last week. 

The initial goal sounded simple: Calculate how much energy is used when we interact with a chatbot, then tally that up to understand why leaders in tech and politics are so keen to harness unprecedented levels of electricity to power AI and reshape our energy grids in the process.

It was, of course, not so simple. After speaking with dozens of researchers, we realized that the common understanding of AI’s energy appetite is full of holes. I encourage you to read the full story, which has some incredible graphics to help you understand this topic. But here are three takeaways I have after the project.

    This story originally appeared in The Algorithm, our weekly newsletter on AI. To get it in your inbox first, sign up here, and check out the rest of our Power Hungry package about AI here.

    The must-reads

    I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Elon Musk has turned on Trump
He called Trump’s domestic policy agenda a “disgusting abomination.” (NYT $)
+ House Speaker Mike Johnson has, naturally, hit back. (Insider $)

2 NASA is in crisis
Its budget has been cut by a quarter, and now its new leader has had his nomination revoked. (New Scientist $)
+ What’s next for NASA’s giant moon rocket? (MIT Technology Review)

3 Here’s how Big Tech plans to wield AI
To build ‘everything apps’ that keep you inside their ecosystem, forever. (The Atlantic $)
+ The trouble is, the experience isn’t always slick enough, as Google has discovered with its ‘Ask Photos’ feature. (The Verge $)
+ How to fight your instinct to blindly trust AI. (WP $)

4 Meta has signed a 20-year deal to buy nuclear power
It’s the latest in a race to try to keep up with AI’s surging energy demands. (ABC)
+ Can nuclear power really fuel the rise of AI? (MIT Technology Review)

5 Extreme heat takes a huge toll on people’s mental health
It’s yet another issue we’re failing to prepare for, as summers get hotter and hotter. (Scientific American $)
+ The quest to protect farmworkers from extreme heat. (MIT Technology Review)

6 China’s robotaxi companies are planning to expand in the Middle East
And they’re getting a warmer welcome than in the US or Europe. (WSJ $)
+ China’s EV giants are also betting big on humanoid robots. (MIT Technology Review)

7 AI will supercharge hackers
The full impact of new AI techniques is yet to be felt, but experts say it’s only a matter of time. (Wired $)
+ Five ways criminals are using AI. (MIT Technology Review)

8 It’s an exciting time to be working on Alzheimer’s treatments
12 of them are moving to the final phase of clinical trials this year. (The Economist $)
+ The innovation that gets an Alzheimer’s drug through the blood-brain barrier. (MIT Technology Review)

9 Workers are being subjected to more and more surveillance
Not just in the gig economy either—‘bossware’ is increasingly appearing in offices too. (Rest of World)

10 Noughties nostalgia is rife on TikTok
It was a pretty fun decade, to be fair. (The Guardian)

Quote of the day

     “This is scientific heaven. Or it used to be.”

    —Tom Rapoport, a 77-year-old Harvard Medical School professor from Germany, expresses his sadness about Trump’s cuts to US science funding to the New York Times. 

    One more thing


    What’s next for the world’s fastest supercomputers

When the Frontier supercomputer came online in 2022, it marked the dawn of so-called exascale computing, with machines that can execute an exaflop—or a quintillion (10¹⁸) floating point operations a second.

Since then, scientists have geared up to make more of these blazingly fast computers: several exascale machines are due to come online in the US and Europe.

But speed itself isn’t the endgame. Researchers hope to pursue previously unanswerable questions about nature—and to design new technologies in areas from transportation to medicine. Read the full story.

    —Sophia Chen

    We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ If tracking tube trains in London is your thing, you’ll love this live map.
+ Take a truly bonkers trip down memory lane, courtesy of these FBI artifacts.
+ Netflix’s Frankenstein looks pretty intense.
+ Why landlines are so darn spooky.
  • MIT Technology Review Insiders Panel


  • The Download: US climate studies are being shut down, and building cities from lava

    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

    The Trump administration has shut down more than 100 climate studies

    The Trump administration has terminated National Science Foundation grants for more than 100 research projects related to climate change, according to an MIT Technology Review analysis of a database that tracks such cuts.

    The move will cut off what’s likely to amount to tens of millions of dollars for studies that were previously approved and, in most cases, already in the works. Many believe the administration’s broader motivation is to undermine the power of the university system and prevent research findings that cut against its politics. Read the full story.

    —James Temple

    This architect wants to build cities out of lava

Arnhildur Pálmadóttir is an architect with an extraordinary mission: to harness molten lava and build cities out of it.

Pálmadóttir believes the lava that flows from a single eruption could yield enough building material to lay the foundations of an entire city. She has been researching this possibility for more than five years as part of a project she calls Lavaforming. Together with her son and colleague Arnar Skarphéðinsson, she has identified three potential techniques that could change how future homes are designed and built from repurposed lava. Read the full story.

—Elissaveta M. Brandon

    This story is from the most recent edition of our print magazine, which is all about how technology is changing creativity. Subscribe now to read it and to receive future print copies once they land.

    The must-reads

    I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 America is failing to win the tech race against China
In fields as diverse as drones and energy. (WSJ $)
+ Humanoid robots are an area of particular interest. (Bloomberg $)
+ China has accused the US of violating the pair’s trade truce. (FT $)

2 Who is really in charge of DOGE?
According to a fired staffer, it wasn’t Elon Musk. (Wired $)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

3 Brazilians will soon be able to sell their digital data
It’s the first time citizens will be able to monetize their digital footprint. (Rest of World)

4 The Trump administration’s anti-vaccine stance is stoking fear among scientists
It’s slashing funding for mRNA trials, and experts are afraid to speak out. (The Atlantic $)
+ This annual shot might protect against HIV infections. (MIT Technology Review)

5 Tech companies want us to spend longer talking to chatbots
Those conversations can easily veer into dangerous territory. (WP $)
+ How we use AI in the future is up to us. (New Yorker $)
+ This benchmark used Reddit’s AITA to test how much AI models suck up to us. (MIT Technology Review)

6 TikTok’s mental health videos are rife with misinformation
A lot of the advice is useless at best, and harmful at worst. (The Guardian)

7 Lawyers are hooked on ChatGPT
Even though it’s inherently unreliable. (The Verge)
+ Yet another lawyer has been found referencing nonexistent citations. (The Guardian)
+ How AI is introducing errors into courtrooms. (MIT Technology Review)

8 How chefs are using generative AI
They’re starting to experiment with using it to create innovative new dishes. (NYT $)
+ Watch this robot cook shrimp and clean autonomously. (MIT Technology Review)

9 The influencer suing her rival has dropped her lawsuit
The legal fight over ownership of a basic aesthetic has come to an end. (NBC News)

10 Roblox’s new game has sparked a digital fruit underground market
And players are already spending millions of dollars every week. (Bloomberg $)

Quote of the day

    “We can’t substitute complex thinking with machines. AI can’t replace our curiosity, creativity or emotional intelligence.”

    —Mateusz Demski, a journalist in Poland, tells the Guardian about how his radio station employer laid him off, only to later launch shows fronted by AI-generated presenters.

    One more thing

Adventures in the genetic time machine

An ancient-DNA revolution is turning the high-speed equipment used to study the DNA of living things on to specimens from the past.

The technology is being used to create genetic maps of saber-toothed cats, cave bears, and thousands of ancient humans, including Vikings, Polynesian navigators, and numerous Neanderthals. The total number of ancient humans studied is more than 10,000 and rising fast.

The old genes have already revealed remarkable stories of human migrations around the globe. But researchers are hoping ancient DNA will be more than a telescope on the past—they hope it will have concrete practical use in the present. Read the full story.

    —Antonio Regalado

    We can still have nice things

A place for comfort, fun and distraction to brighten up your day.
+ The ancient Persians managed to keep cool using an innovative breeze-catching technique that could still be useful today.
+ Knowledge is power—here’s a helpful list of hoaxes to be aware of.
+ Who said it: Homer Simpson or Pete Hegseth?
+ I had no idea London has so many cat statues.
  • This startup wants to make more climate-friendly metal in the US

    A California-based company called Magrathea just turned on a new electrolyzer that can make magnesium metal from seawater. The technology has the potential to produce the material, which is used in vehicles and defense applications, with net-zero greenhouse-gas emissions.

    Magnesium is an incredibly light metal, and it’s used for parts in cars and planes, as well as in aluminum alloys like those in vehicles. The metal is also used in defense and industrial applications, including the production processes for steel and titanium.

    Today, China dominates production of magnesium, and the most common method generates a lot of the emissions that cause climate change. If Magrathea can scale up its process, it could help provide an alternative source of the metal and clean up industries that rely on it, including automotive manufacturing.

    The star of Magrathea’s process is an electrolyzer, a device that uses electricity to split a material into its constituent elements. Using an electrolyzer in magnesium production isn’t new, but Magrathea’s approach represents an update. “We really modernized it and brought it into the 21st century,” says Alex Grant, Magrathea’s cofounder and CEO.

The whole process starts with salty water. There are small amounts of magnesium in seawater, as well as in salt lakes and groundwater. (In seawater, the concentration is about 1,300 parts per million, so magnesium makes up about 0.1% of seawater by weight.) If you take that seawater or brine and clean it up, concentrate it, and dry it out, you get a solid magnesium chloride salt.

Magrathea takes that salt (which it currently buys from Cargill) and puts it into the electrolyzer. The device reaches temperatures of about 700 °C (almost 1,300 °F) and runs electricity through the molten salt to split the magnesium from the chlorine, forming magnesium metal.
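The chemistry at work is the textbook electrolytic decomposition of molten magnesium chloride; this is standard electrochemistry, not a detail Magrathea has disclosed about its particular cell. Magnesium ions are reduced to metal at the cathode while chloride is oxidized to chlorine gas at the anode:

$$\mathrm{MgCl_2(l)} \;\longrightarrow\; \mathrm{Mg(l)} + \mathrm{Cl_2(g)}$$

$$\text{cathode: } \mathrm{Mg^{2+} + 2e^- \rightarrow Mg(l)} \qquad \text{anode: } \mathrm{2Cl^- \rightarrow Cl_2(g) + 2e^-}$$

Magnesium melts at 650 °C, so at roughly 700 °C both the salt and the metal are liquid, and the chlorine leaves as a gas that electrolytic producers typically capture for reuse.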

    Typically, running an electrolyzer in this process would require a steady source of electricity. The temperature is generally kept just high enough to maintain the salt in a molten state. Allowing it to cool down too much would allow it to solidify, messing up the process and potentially damaging the equipment. Heating it up more than necessary would just waste energy. 

    Magrathea’s approach builds in flexibility. Basically, the company runs its electrolyzer about 100 °C higher than is necessary to keep the molten salt a liquid. It then uses the extra heat in inventive ways, including to dry out the magnesium salt that eventually goes into the reactor. This preparation can be done intermittently, so the company can take in electricity when it’s cheaper or when more renewables are available, cutting costs and emissions. In addition, the process will make a co-product, called magnesium oxide, that can be used to trap carbon dioxide from the atmosphere, helping to cancel out the remaining carbon pollution.
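The carbon-trapping role of the magnesium oxide co-product rests on mineral carbonation, a well-established reaction in which the oxide binds carbon dioxide from the air as a stable solid carbonate:

$$\mathrm{MgO(s)} + \mathrm{CO_2(g)} \;\longrightarrow\; \mathrm{MgCO_3(s)}$$

Since the molar masses are about 40 g/mol for MgO and 44 g/mol for CO₂, each ton of the oxide can in principle bind roughly a ton of carbon dioxide, which is what lets the co-product offset residual emissions.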

    The result could be a production process with net-zero emissions, according to an independent life cycle assessment completed in January. While it likely won’t reach this bar at first, the potential is there for a much more climate-friendly process than what’s used in the industry today, Grant says.

    Breaking into magnesium production won’t be simple, says Simon Jowitt, director of the Nevada Bureau of Mines and of the Center for Research in Economic Geology at the University of Nevada, Reno.

    China produces roughly 95% of the global supply as of 2024, according to data from the US Geological Survey. This dominant position means companies there can flood the market with cheap metal, making it difficult for others to compete. “The economics of all this is uncertain,” Jowitt says.

    The US has some trade protections in place, including an anti-dumping duty, but newer players with alternative processes can still face obstacles. US Magnesium, a company based in Utah, was the only company making magnesium in the US in recent years, but it shut down production in 2022 after equipment failures and a history of environmental concerns. 

    Magrathea plans to start building a demonstration plant in Utah in late 2025 or early 2026, which will have a capacity of roughly 1,000 tons per year and should be running in 2027. In February the company announced that it signed an agreement with a major automaker, though it declined to share its name on the record. The automaker pre-purchased material from the demonstration plant and will incorporate it into existing products.

    After the demonstration plant is running, the next step would be to build a commercial plant with a larger capacity of around 50,000 tons annually.
  • OpenAI: The power and the pride

    In April, Paul Graham, the founder of the tech startup accelerator Y Combinator, sent a tweet in response to former YC president and current OpenAI CEO Sam Altman. Altman had just bid a public goodbye to GPT-4 on X, and Graham had a follow-up question. 

“If you had [GPT-4’s weights] etched on a piece of metal in the most compressed form,” Graham wrote, referring to the values that determine the model’s behavior, “how big would the piece of metal have to be? This is a mostly serious question. These models are history, and by default digital data evaporates.” 

There is no question that OpenAI pulled off something historic with its release of ChatGPT, built on GPT-3.5, in 2022. It set in motion an AI arms race that has already changed the world in a number of ways and seems poised to have an even greater long-term effect than the short-term disruptions to things like education and employment that we are already beginning to see. How that turns out for humanity is something we are still reckoning with and may be for quite some time. But a pair of recent books both attempt to get their arms around it with accounts of what two leading technology journalists saw at the OpenAI revolution. 

    In Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, Karen Hao tells the story of the company’s rise to power and its far-reaching impact all over the world. Meanwhile, The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future, by the Wall Street Journal’s Keach Hagey, homes in more on Altman’s personal life, from his childhood through the present day, in order to tell the story of OpenAI. Both paint complex pictures and show Altman in particular as a brilliantly effective yet deeply flawed creature of Silicon Valley—someone capable of always getting what he wants, but often by manipulating others. 

    Hao, who was formerly a reporter with MIT Technology Review, began reporting on OpenAI while at this publication and remains an occasional contributor. One chapter of her book grew directly out of that reporting. And in fact, as Hao says in the acknowledgments of Empire of AI, some of her reporting for MIT Technology Review, a series on AI colonialism, “laid the groundwork for the thesis and, ultimately, the title of this book.” So you can take this as a kind of disclaimer that we are predisposed to look favorably on Hao’s work. 

    With that said, Empire of AI is a powerful work, bristling not only with great reporting but also with big ideas. This comes across in service to two main themes. 

The first is simple: It is the story of ambition overriding ethics. The history of OpenAI as Hao tells it is very much a tale of a company that was founded on the idealistic desire to create a safety-focused artificial general intelligence but instead became more interested in winning. This is a story we’ve seen many times before in Big Tech. See Theranos, which was going to make diagnostics easier, or Uber, which was founded to break the cartel of “Big Taxi.” But the closest analogue might be Google, which went from “Don’t be evil” to illegal monopolist. For that matter, consider how Google went from holding off on releasing its language model as a consumer product out of an abundance of caution to rushing a chatbot out the door to catch up with and beat OpenAI. In Silicon Valley, no matter what one’s original intent, it always comes back to winning.  

    The second theme is more complex and forms the book’s thesis about what Hao calls AI colonialism. The idea is that the large AI companies act like traditional empires, siphoning wealth from the bottom rungs of society in the forms of labor, creative works, raw materials, and the like to fuel their ambition and enrich those at the top of the ladder. “I’ve found only one metaphor that encapsulates the nature of what these AI power players are: empires,” she writes.

    “During the long era of European colonialism, empires seized and extracted resources that were not their own and exploited the labor of the people they subjugated to mine, cultivate, and refine those resources for the empires’ enrichment.” She goes on to chronicle her own growing disillusionment with the industry. “With increasing clarity,” she writes, “I realized that the very revolution promising to bring a better future was instead, for people on the margins of society, reviving the darkest remnants of the past.” 

    To document this, Hao steps away from her desk and goes out into the world to see the effects of this empire as it sprawls across the planet. She travels to Colombia to meet with data labelers tasked with teaching AI what various images show, one of whom she describes sprinting back to her apartment for the chance to make a few dollars. She documents how workers in Kenya who performed data-labeling content moderation for OpenAI came away traumatized by seeing so much disturbing material. In Chile she documents how the industry extracts precious resources—water, power, copper, lithium—to build out data centers. 

    She lands on the ways people are pushing back against the empire of AI across the world. Hao draws lessons from New Zealand, where Maori people are attempting to save their language using a small language model of their own making. Trained on volunteers’ voice recordings and running on just two graphics processing units, or GPUs, rather than the thousands employed by the likes of OpenAI, it’s meant to benefit the community, not exploit it. 

Hao writes that she is not against AI. Rather: “What I reject is the dangerous notion that broad benefit from AI can only be derived from—indeed will ever emerge from—a vision of the technology that requires the complete capitulation of our privacy, our agency, and our worth, including the value of our labor and art, toward an ultimately imperial centralization project. … shows us another way. It imagines how AI could be exactly the opposite. Models can be small and task-specific, their training data contained and knowable, ridding the incentives for widespread exploitative and psychologically harmful labor practices and the all-consuming extractivism of producing and running massive supercomputers.” 

Hagey’s book is more squarely focused on Altman’s ambition, which she traces back to his childhood. Yet interestingly, she also zeroes in on the OpenAI CEO’s attempt to create an empire. Indeed, “Altman’s departure from YC had not slowed his civilization-building ambitions,” Hagey writes. She goes on to chronicle how Altman, who had previously mulled a run for governor of California, set up experiments with income distribution via Tools for Humanity, the parent company of Worldcoin. She quotes Altman saying of it, “I thought it would be interesting to see … just how far technology could accomplish some of the goals that used to be done by nation-states.”

Overall, The Optimist is the more straightforward business biography of the two. Hagey has packed it full of scoops and insights and behind-the-scenes intrigue. It is immensely readable as a result, especially in the second half, when OpenAI really takes over the story. Hagey also seems to have been given far more access to Altman and his inner circles, personal and professional, than Hao did, and that allows for a fuller telling of the CEO’s story in places. For example, both writers cover the tragic story of Altman’s sister Annie, her estrangement from the family, and her accusations in particular about suffering sexual abuse at the hands of Sam (something he and the rest of the Altman family vehemently deny). Hagey’s telling provides a more nuanced picture of the situation, with more insight into family dynamics.

    Hagey concludes by describing Altman’s reckoning with his role in the long arc of human history and what it will mean to create a “superintelligence.” His place in that sweep is something that clearly has consumed the CEO’s thoughts. When Paul Graham asked about preserving GPT-4, for example, Altman had a response at the ready. He replied that the company had already considered this, and that the sheet of metal would need to be 100 meters square.
  • The Download: the story of OpenAI, and making magnesium

    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

    OpenAI: The power and the pride

    OpenAI’s release of ChatGPT 3.5 set in motion an AI arms race that has changed the world.

How that turns out for humanity is something we are still reckoning with and may be for quite some time. But a pair of recent books both attempt to get their arms around it. In Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, Karen Hao tells the story of the company’s rise to power and its far-reaching impact all over the world. Meanwhile, The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future, by the Wall Street Journal’s Keach Hagey, homes in more on Altman’s personal life, from his childhood through the present day, in order to tell the story of OpenAI. 

    Both paint complex pictures and show Altman in particular as a brilliantly effective yet deeply flawed creature of Silicon Valley—someone capable of always getting what he wants, but often by manipulating others. Read the full review.—Mat Honan

    This startup wants to make more climate-friendly metal in the US

    The news: A California-based company called Magrathea just turned on a new electrolyzer that can make magnesium metal from seawater. The technology has the potential to produce the material, which is used in vehicles and defense applications, with net-zero greenhouse-gas emissions.

    Why it matters: Today, China dominates production of magnesium, and the most common method generates a lot of the emissions that cause climate change. If Magrathea can scale up its process, it could help provide an alternative source of the metal and clean up industries that rely on it, including automotive manufacturing. Read the full story.

    —Casey Crownhart

    A new sodium metal fuel cell could help clean up transportation

    A new type of fuel cell that runs on sodium metal could one day help clean up sectors where it’s difficult to replace fossil fuels, like rail, regional aviation, and short-distance shipping. The device represents a departure from technologies like lithium-based batteries and is more similar conceptually to hydrogen fuel cell systems. The sodium-air fuel cell has a higher energy density than lithium-ion batteries and doesn’t require the super-cold temperatures or high pressures that hydrogen does, making it potentially more practical for transport. Read the full story.

    —Casey Crownhart

    The must-reads

    I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US state department is considering vetting foreign students’ social media
After ordering US embassies to suspend international students’ visa appointments. (Politico)
+ Applicants’ posts, shares and comments could be assessed. (The Guardian)
+ The Trump administration also wants to cut off Harvard’s funding. (NYT $)

2 SpaceX’s rocket exploded during its test flight
It’s the third consecutive explosion the company has suffered this year. (CNBC)
+ It was the first significant attempt to reuse Starship hardware. (Space)
+ Elon Musk is fairly confident the problem with the engine bay has been resolved. (Ars Technica)

3 The age of AI layoffs is here
And it’s taking place in conference rooms, not on factory floors. (Quartz)
+ People are worried that AI will take everyone’s jobs. We’ve been here before. (MIT Technology Review)

4 Thousands of IVF embryos in Gaza were destroyed by Israeli strikes
An attack destroyed the fertility clinic where they were housed. (BBC)
+ Inside the strange limbo facing millions of IVF embryos. (MIT Technology Review)

5 China’s overall greenhouse gas emissions have fallen for the first time
Even as energy demand has risen. (Vox)
+ China’s complicated role in climate change. (MIT Technology Review)

6 The sun is damaging Starlink’s satellites
Its eruptions are reducing the satellites’ lifespans. (New Scientist $)
+ Apple’s satellite connectivity dreams are being thwarted by Musk. (The Information $)

7 European companies are struggling to do business in China
Even the ones that have operated there for decades. (NYT $)
+ The country’s economic slowdown is making things tough. (Bloomberg $)

8 US hospitals are embracing helpful robots
They’re delivering medications and supplies so nurses don’t have to. (FT $)
+ Will we ever trust robots? (MIT Technology Review)

9 Meet the people who write the text messages on your favorite show
They try to make messages as realistic, and intriguing, as possible. (The Guardian)

10 Robot dogs are delivering parcels in Austin
Well, over 100-yard distances at least. (TechCrunch)

Quote of the day

    “I wouldn’t say there’s hope. I wouldn’t bet on that.”

    —Michael Roll, a partner at law firm Roll & Harris, explains to Wired why businesses shouldn’t get their hopes up over obtaining refunds for Donald Trump’s tariff price hikes.

    One more thing

Is the digital dollar dead?

In 2020, digital currencies were one of the hottest topics in town. China was well on its way to launching its own central bank digital currency, or CBDC, and many other countries launched CBDC research projects, including the US.

How things change. The digital dollar—even though it doesn’t exist—has now become political red meat, as some politicians label it a dystopian tool for surveillance. So is the dream of the digital dollar dead? Read the full story.

    —Mike Orcutt

    We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Recently returned from vacation? Here’s how to cope with coming back to reality.
+ Reconnecting with friends is one of life’s great joys.
+ A new Parisian cocktail bar has done away with ice entirely in a bid to be more sustainable.
+ Why being bored is good for you—no, really.
  • The AI Hype Index: College students are hooked on ChatGPT

    Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

Large language models confidently present their responses as accurate and reliable, even when they’re neither of those things. That’s why we’ve recently seen chatbots supercharge vulnerable people’s delusions, make citation mistakes in an important legal battle between music publishers and Anthropic, and (in the case of xAI’s Grok) rant irrationally about “white genocide.”

    But it’s not all bad news—AI could also finally lead to a better battery life for your iPhone and solve tricky real-world problems that humans have been struggling to crack, if Google DeepMind’s new model is any indication. And perhaps most exciting of all, it could combine with brain implants to help people communicate when they have lost the ability to speak.
  • This giant microwave may change the future of war

    Imagine: China deploys hundreds of thousands of autonomous drones in the air, on the sea, and under the water—all armed with explosive warheads or small missiles. These machines descend in a swarm toward military installations on Taiwan and nearby US bases, and over the course of a few hours, a single robotic blitzkrieg overwhelms the US Pacific force before it can even begin to fight back. 

    Maybe it sounds like a new Michael Bay movie, but it’s the scenario that keeps the chief technology officer of the US Army up at night.

    “I’m hesitant to say it out loud so I don’t manifest it,” says Alex Miller, a longtime Army intelligence official who became the CTO to the Army’s chief of staff in 2023.

    Even if World War III doesn’t break out in the South China Sea, every US military installation around the world is vulnerable to the same tactics—as are the militaries of every other country around the world. The proliferation of cheap drones means just about any group with the wherewithal to assemble and launch a swarm could wreak havoc, no expensive jets or massive missile installations required. 

    While the US has precision missiles that can shoot these drones down, they don’t always succeed: A drone attack killed three US soldiers and injured dozens more at a base in the Jordanian desert last year. And each American missile costs orders of magnitude more than its targets, which limits their supply; countering thousand-dollar drones with missiles that cost hundreds of thousands, or even millions, of dollars per shot can only work for so long, even with a defense budget that could reach a trillion dollars next year.

    The US armed forces are now hunting for a solution—and they want it fast. Every branch of the service and a host of defense tech startups are testing out new weapons that promise to disable drones en masse. There are drones that slam into other drones like battering rams; drones that shoot out nets to ensnare quadcopter propellers; precision-guided Gatling guns that simply shoot drones out of the sky; electronic approaches, like GPS jammers and direct hacking tools; and lasers that melt holes clear through a target’s side.

    Then there are the microwaves: high-powered electronic devices that push out kilowatts of power to zap the circuits of a drone as if it were the tinfoil you forgot to take off your leftovers when you heated them up. 

    That’s where Epirus comes in. 

When I went to visit the HQ of this 185-person startup in Torrance, California, earlier this year, I got a behind-the-scenes look at its massive microwave, called Leonidas, which the US Army is already betting on as a cutting-edge anti-drone weapon. The Army awarded Epirus a $66 million contract in early 2023, topped that up with a further award last fall, and is currently deploying a handful of the systems for testing with US troops in the Middle East and the Pacific. 

    Up close, the Leonidas that Epirus built for the Army looks like a two-foot-thick slab of metal the size of a garage door stuck on a swivel mount. Pop the back cover, and you can see that the slab is filled with dozens of individual microwave amplifier units in a grid. Each is about the size of a safe-deposit box and built around a chip made of gallium nitride, a semiconductor that can survive much higher voltages and temperatures than the typical silicon. 

    Leonidas sits on top of a trailer that a standard-issue Army truck can tow, and when it is powered on, the company’s software tells the grid of amps and antennas to shape the electromagnetic waves they’re blasting out with a phased array, precisely overlapping the microwave signals to mold the energy into a focused beam. Instead of needing to physically point a gun or parabolic dish at each of a thousand incoming drones, the Leonidas can flick between them at the speed of software.
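To make that beam-steering idea concrete, here is a minimal sketch of the phase math behind a uniform linear phased array. It is illustrative only: the element count, spacing, and frequency below are generic textbook assumptions, not Epirus specifications.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def steering_phases(n_elements: int, spacing_m: float,
                    freq_hz: float, angle_deg: float) -> np.ndarray:
    """Per-element phase offsets (radians) that point a uniform
    linear array's main lobe at angle_deg away from broadside."""
    wavelength = C / freq_hz
    k = 2 * np.pi / wavelength   # wavenumber
    theta = np.radians(angle_deg)
    # Delay each element so its emission arrives in phase along theta.
    return -k * spacing_m * np.arange(n_elements) * np.sin(theta)

# Retargeting is just recomputing one phase vector -- no moving parts --
# which is why software can flick the beam from one drone to the next.
phases_left = steering_phases(64, 0.05, 3e9, -20.0)
phases_right = steering_phases(64, 0.05, 3e9, 35.0)
```

The design point this illustrates is that steering is purely computational: switching targets costs one vector computation and an amplifier update, rather than the seconds a mechanical gimbal would need to slew.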

The Leonidas contains dozens of microwave amplifier units and can pivot to direct waves at incoming swarms of drones. EPIRUS

    Of course, this isn’t magic—there are practical limits on how much damage one array can do, and at what range—but the total effect could be described as an electromagnetic pulse emitter, a death ray for electronics, or a force field that could set up a protective barrier around military installations and drop drones the way a bug zapper fizzles a mob of mosquitoes.

    I walked through the nonclassified sections of the Leonidas factory floor, where a cluster of engineers working on weaponeering—the military term for figuring out exactly how much of a weapon, be it high explosive or microwave beam, is necessary to achieve a desired effect—ran tests in a warren of smaller anechoic rooms. Inside, they shot individual microwave units at a broad range of commercial and military drones, cycling through waveforms and power levels to try to find the signal that could fry each one with maximum efficiency. 
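In effect, that weaponeering loop is a search for the minimum effective dose: sweep waveforms and power levels against each target type and keep the cheapest setting that reliably downs it. Here is a toy sketch of the idea; every name, waveform, and threshold is hypothetical, not Epirus’s actual test harness.

```python
# Toy weaponeering sweep: find the lowest-power emitter setting
# that disables a given drone model. Entirely illustrative.
WAVEFORMS = ["cw", "pulsed_short", "pulsed_long"]
POWER_LEVELS_KW = [1, 2, 5, 10]  # ascending, so the cheapest win comes first

def fire_and_observe(waveform: str, power_kw: float) -> bool:
    """Stand-in for one anechoic-chamber trial. Toy rule: pulsed
    waveforms couple into the target at lower power than continuous-wave."""
    threshold_kw = {"cw": 8, "pulsed_short": 3, "pulsed_long": 5}[waveform]
    return power_kw >= threshold_kw

def best_setting():
    """Return the first (waveform, power) pair that downs the drone."""
    for power_kw in POWER_LEVELS_KW:
        for waveform in WAVEFORMS:
            if fire_and_observe(waveform, power_kw):
                return waveform, power_kw
    return None  # nothing in range worked

print(best_setting())  # ('pulsed_short', 5) under these toy thresholds
```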

    On a live video feed from inside one of these foam-padded rooms, I watched a quadcopter drone spin its propellers and then, once the microwave emitter turned on, instantly stop short—first the propeller on the front left and then the rest. A drone hit with a Leonidas beam doesn’t explode—it just falls.

    Compared with the blast of a missile or the sizzle of a laser, it doesn’t look like much. But it could force enemies to come up with costlier ways of attacking that reduce the advantage of the drone swarm, and it could get around the inherent limitations of purely electronic or strictly physical defense systems. It could save lives.

    Epirus CEO Andy Lowery, a tall guy with sparkplug energy and a rapid-fire southern Illinois twang, doesn’t shy away from talking big about his product. As he told me during my visit, Leonidas is intended to lead a last stand, like the Spartan from whom the microwave takes its name—in this case, against hordes of unmanned aerial vehicles, or UAVs. While the actual range of the Leonidas system is kept secret, Lowery says the Army is looking for a solution that can reliably stop drones within a few kilometers. He told me, “They would like our system to be the owner of that final layer—to get any squeakers, any leakers, anything like that.”

    Now that they’ve told the world they “invented a force field,” Lowery added, the focus is on manufacturing at scale—before the drone swarms really start to descend or a nation with a major military decides to launch a new war. Before, in other words, Miller’s nightmare scenario becomes reality. 

    Why zap?

    Miller remembers well when the danger of small weaponized drones first appeared on his radar. Reports of Islamic State fighters strapping grenades to the bottom of commercial DJI Phantom quadcopters first emerged in late 2016 during the Battle of Mosul. “I went, ‘Oh, this is going to be bad,’ because basically it’s an airborne IED at that point,” he says.

    He’s tracked the danger as it’s built steadily since then, with advances in machine vision, AI coordination software, and suicide drone tactics only accelerating. 

    Then the war in Ukraine showed the world that cheap technology has fundamentally changed how warfare happens. We have watched in high-definition video how a cheap, off-the-shelf drone modified to carry a small bomb can be piloted directly into a faraway truck, tank, or group of troops to devastating effect. And larger suicide drones, also known as “loitering munitions,” can be produced for just tens of thousands of dollars and launched in massive salvos to hit soft targets or overwhelm more advanced military defenses through sheer numbers. 

    As a result, Miller, along with large swaths of the Pentagon and DC policy circles, believes that the current US arsenal for defending against these weapons is just too expensive and the tools in too short supply to truly match the threat.

    Just look at Yemen, a poor country where the Houthi military group has been under constant attack for the past decade. Armed with this new low-tech arsenal, in the past 18 months the rebel group has been able to bomb cargo ships and effectively disrupt global shipping in the Red Sea—part of an effort to apply pressure on Israel to stop its war in Gaza. The Houthis have also used missiles, suicide drones, and even drone boats to launch powerful attacks on US Navy ships sent to stop them.

The most successful defense tech firm selling anti-drone weapons to the US military right now is Anduril, the company started by Palmer Luckey, the inventor of the Oculus VR headset, and a crew of cofounders from Oculus and defense data giant Palantir. In just the past few months, the Marines have chosen Anduril for counter-drone contracts that could be worth hundreds of millions of dollars over the next decade, and the company has been working with Special Operations Command since 2022 on a counter-drone contract that could be worth nearly a billion dollars over a similar time frame. It’s unclear from the contracts what, exactly, Anduril is selling to each organization, but its weapons include electronic warfare jammers, jet-powered drone bombs, and propeller-driven Anvil drones designed to simply smash into enemy drones.

    In this arsenal, the cheapest way to stop a swarm of drones is electronic warfare: jamming the GPS or radio signals used to pilot the machines. But the intense drone battles in Ukraine have advanced the art of jamming and counter-jamming close to the point of stalemate. As a result, a new state of the art is emerging: unjammable drones that operate autonomously by using onboard processors to navigate via internal maps and computer vision, or even drones connected with 20-kilometer-long filaments of fiber-optic cable for tethered control.

But unjammable doesn’t mean unzappable. Instead of using the scrambling method of a jammer, which employs an antenna to block the drone’s connection to a pilot or remote guidance system, the Leonidas microwave beam hits a drone body broadside. The energy finds its way into something electrical, whether the central flight controller or a tiny wire controlling a flap on a wing, to short-circuit whatever’s available.

Tyler Miller, a senior systems engineer on Epirus’s weaponeering team, told me that they never know exactly which part of the target drone is going to go down first, but they’ve reliably seen the microwave signal get in somewhere to overload a circuit. “Based on the geometry and the way the wires are laid out,” he said, one of those wires is going to be the best path in. “Sometimes if we rotate the drone 90 degrees, you have a different motor go down first,” he added.

    The team has even tried wrapping target drones in copper tape, which would theoretically provide shielding, only to find that the microwave still finds a way in through moving propeller shafts or antennas that need to remain exposed for the drone to fly. 


    Leonidas also has an edge when it comes to downing a mass of drones at once. Physically hitting a drone out of the sky or lighting it up with a laser can be effective in situations where electronic warfare fails, but anti-drone drones can only take out one at a time, and lasers need to precisely aim and shoot. Epirus’s microwaves can damage everything in a roughly 60-degree arc from the Leonidas emitter simultaneously and keep on zapping and zapping; directed energy systems like this one never run out of ammo.

As for cost, each Army Leonidas unit currently runs in the “low eight figures,” Lowery told me. Defense contract pricing can be opaque, but Epirus delivered four units for its initial $66 million contract, giving a back-of-napkin price of around $16 million each. For comparison, Stinger missiles from Raytheon, which soldiers shoot at enemy aircraft or drones from a shoulder-mounted launcher, cost hundreds of thousands of dollars a pop, meaning the Leonidas could start costing less after it downs the first wave of a swarm.
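The back-of-napkin arithmetic is easy to replicate. A short sketch under stated assumptions: a unit price in the low eight figures and a mid-six-figure interceptor cost, neither of which is a quoted price.

```python
# Rough engagement economics, using assumed figures only.
LEONIDAS_UNIT_COST = 16_000_000  # "low eight figures" per system
INTERCEPTOR_COST = 400_000       # assumed cost of one missile shot
DRONE_COST = 1_000               # cheap commercial attack drone

def breakeven_drones() -> int:
    """Downed drones after which a one-time microwave purchase beats
    firing one interceptor per drone."""
    return -(-LEONIDAS_UNIT_COST // INTERCEPTOR_COST)  # ceiling division

print(breakeven_drones())              # 40 drones at these assumed prices
print(INTERCEPTOR_COST // DRONE_COST)  # each missile costs ~400x its target
```

However the exact figures shake out, the asymmetry is the point: the defender’s marginal cost per zap is close to zero, while the attacker pays for every drone.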

    Raytheon’s radar, reversed

    Epirus is part of a new wave of venture-capital-backed defense companies trying to change the way weapons are created—and the way the Pentagon buys them. The largest defense companies, firms like Raytheon, Boeing, Northrop Grumman, and Lockheed Martin, typically develop new weapons in response to research grants and cost-plus contracts, in which the US Department of Defense guarantees a certain profit margin to firms building products that match their laundry list of technical specifications. These programs have kept the military supplied with cutting-edge weapons for decades, but the results may be exquisite pieces of military machinery delivered years late and billions of dollars over budget.

    Rather than building to minutely detailed specs, the new crop of military contractors aim to produce products on a quick time frame to solve a problem and then fine-tune them as they pitch to the military. The model, pioneered by Palantir and SpaceX, has since propelled companies like Anduril, Shield AI, and dozens of other smaller startups into the business of war as venture capital piles tens of billions of dollars into defense.

Like Anduril, Epirus has direct Palantir roots; it was cofounded by Joe Lonsdale, who also cofounded Palantir, and John Tenet, then Lonsdale’s colleague at his venture fund, 8VC. 

    While Epirus is doing business in the new mode, its roots are in the old—specifically in Raytheon, a pioneer in the field of microwave technology. Cofounded by MIT professor Vannevar Bush in 1922, it manufactured vacuum tubes, like those found in old radios. But the company became synonymous with electronic defense during World War II, when Bush spun up a lab to develop early microwave radar technology invented by the British into a workable product, and Raytheon then began mass-producing microwave tubes—known as magnetrons—for the US war effort. By the end of the war in 1945, Raytheon was making 80% of the magnetrons powering Allied radar across the world.

    From padded foam chambers at the Epirus HQ, Leonidas devices can be safely tested on drones.EPIRUS

Large tubes remained the best way to emit high-power microwaves for more than half a century, handily outperforming silicon-based solid-state amplifiers. They’re still around—the microwave on your kitchen counter runs on a vacuum tube magnetron. But tubes have downsides: They’re hot, they’re big, and they require upkeep.

By the 2000s, new methods of building solid-state amplifiers out of materials like gallium nitride started to mature and were able to handle more power than silicon without melting or shorting out. The US Navy spent hundreds of millions of dollars on cutting-edge microwave contracts, one for a project at Raytheon called Next Generation Jammer—geared specifically toward designing a new way to make high-powered microwaves that work at extremely long distances.

    Lowery, the Epirus CEO, began his career working on nuclear reactors on Navy aircraft carriers before he became the chief engineer for Next Generation Jammer at Raytheon in 2010. There, he and his team worked on a system that relied on many of the same fundamentals that now power the Leonidas—using the same type of amplifier material and antenna setup to fry the electronics of a small target at much closer range rather than disrupting the radar of a target hundreds of miles away. 

    The similarity is not a coincidence: Two engineers from Next Generation Jammer helped launch Epirus in 2018. Lowery—who by then was working at the augmented-reality startup RealWear, which makes industrial smart glasses—joined Epirus in 2021 to run product development and was asked to take the top spot as CEO in 2023, as Leonidas became a fully formed machine. Much of the founding team has since departed for other projects, but Raytheon still runs through the company’s collective CV: ex-Raytheon radar engineer Matt Markel started in January as the new CTO, and Epirus’s chief engineer for defense, its VP of engineering, its VP of operations, and a number of employees all have Raytheon roots as well.

    Markel tells me that the Epirus way of working wouldn’t have flown at one of the big defense contractors: “They never would have tried spinning off the technology into a new application without a contract lined up.” The Epirus engineers saw the use case, raised money to start building Leonidas, and already had prototypes in the works before any military branch started awarding money to work on the project.

    Waiting for the starting gun

    On the wall of Lowery’s office are two mementos from testing days at an Army proving ground: a trophy wing from a larger drone, signed by the whole testing team, and a framed photo documenting the Leonidas’s carnage—a stack of dozens of inoperative drones piled up in a heap. 

    Despite what seems to have been an impressive test show, it’s still impossible from the outside to determine whether Epirus’s tech is ready to fully deliver if the swarms descend. 

    The Army would not comment specifically on the efficacy of any new weapons in testing or early deployment, including the Leonidas system. A spokesperson for the Army’s Rapid Capabilities and Critical Technologies Office, or RCCTO, which is the subsection responsible for contracting with Epirus to date, would only say in a statement that it is “committed to developing and fielding innovative Directed Energy solutions to address evolving threats.” 

    But various high-ranking officers appear to be giving Epirus a public vote of confidence. The three-star general who runs RCCTO and oversaw the Leonidas testing last summer told Breaking Defense that “the system actually worked very well,” even if there was work to be done on “how the weapon system fits into the larger kill chain.”

    And when former secretary of the Army Christine Wormuth, then the service’s highest-ranking civilian, gave a parting interview this past January, she mentioned Epirus in all but name, citing “one company” that is “using high-powered microwaves to basically be able to kill swarms of drones.” She called that kind of capability “critical for the Army.” 

The Army isn’t the only branch interested in the microwave weapon. On Epirus’s factory floor when I visited, alongside the big beige Leonidases commissioned by the Army, engineers were building a smaller expeditionary version for the Marines, painted green, which Epirus delivered in late April. Videos show that when the company put some of its microwave emitters on a dock and tested them out for the Navy last summer, the microwaves left their targets dead in the water—successfully frying the circuits of outboard motors like the ones propelling Houthi drone boats. 

    Epirus is also currently working on an even smaller version of the Leonidas that can mount on top of the Army’s Stryker combat vehicles, and it’s testing out attaching a single microwave unit to a small airborne drone, which could work as a highly focused zapper to disable cars, data centers, or single enemy drones. 

    Epirus’s microwave technology is also being tested in devices smaller than the traditional Leonidas. EPIRUS

    While neither the Army nor the Navy has yet announced a contract to start buying Epirus’s systems at scale, the company and its investors are actively preparing for the big orders to start rolling in. It raised $250 million in a funding round in early March to get ready to make as many Leonidases as possible in the coming years, adding to the more than $300 million it’s raised since opening its doors in 2018.

    “If you invent a force field that works,” Lowery boasts, “you really get a lot of attention.”

    The task for Epirus now, assuming that its main customers pull the trigger and start buying more Leonidases, is ramping up production while advancing the tech in its systems. Then there are the more prosaic problems of staffing, assembly, and testing at scale. For future generations of the system, Lowery told me, the goal is refining the antenna design and integrating higher-powered microwave amplifiers to push the output into the tens of kilowatts, allowing for increased range and efficacy. 
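
    Why do more kilowatts mean more range? In free space, the power density a target sees falls with the square of distance, so quadrupling radiated power roughly doubles the range at which the beam delivers the same effect. Here is a minimal Python sketch of that scaling; the power, gain, and range values are purely illustrative assumptions, since the real Leonidas figures are not public.

```python
import math

def power_density(p_watts: float, gain: float, r_meters: float) -> float:
    """Free-space power density (W/m^2) at range r_meters for an emitter
    radiating p_watts through an antenna with directive gain."""
    return p_watts * gain / (4 * math.pi * r_meters ** 2)

GAIN = 1_000  # hypothetical array gain; not a published Epirus number
for p_kw in (10, 40):
    for r in (500, 1_000, 2_000):
        s = power_density(p_kw * 1e3, GAIN, r)
        print(f"{p_kw:>3} kW at {r:>5} m -> {s:8.3f} W/m^2")

# Because density falls as 1/r^2, the 40 kW emitter matches the
# 10 kW emitter's power density at exactly twice the distance.
```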

    While this could be made harder by Trump’s global trade war, Lowery says he’s not worried about the company’s supply chain: although China produces 98% of the world’s gallium, according to the US Geological Survey, and has choked off exports to the US, Epirus’s chip supplier uses recycled gallium from Japan. 

    The other outside challenge may be that Epirus isn’t the only company building a drone zapper. One of China’s state-owned defense companies has been working on its own anti-drone high-powered microwave weapon called the Hurricane, which it displayed at a major military show in late 2024. 

    It may be a sign that anti-electronics force fields will become common among the world’s militaries—and if so, the future of war is unlikely to go back to the status quo ante, and it might zag in a different direction yet again. But military planners believe it’s crucial for the US not to be left behind. So if it works as promised, Epirus could very well change the way that war will play out in the coming decade. 

    While Miller, the Army CTO, can’t speak directly to Epirus or any specific system, he will say that he believes anti-drone measures are going to have to become ubiquitous for US soldiers. “Counter-UAS [Unmanned Aircraft System] unfortunately is going to be like counter-IED,” he says. “It’s going to be every soldier’s job to think about UAS threats the same way it was to think about IEDs.” 

    And, he adds, it’s his job and his colleagues’ to make sure that tech so effective it works like “almost magic” is in the hands of the average rifleman. To that end, Lowery told me, Epirus is designing the Leonidas control system to work simply for troops, allowing them to identify a cluster of targets and start zapping with just a click of a button—but only extensive use in the field can prove that out.
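
    To make that one-click idea concrete, here is a minimal sketch of how tracked drones might be grouped into a single selectable cluster. It is purely illustrative and in no way Epirus’s actual control software: the Track class, the 150-meter grouping threshold, and the greedy single-link clustering are all assumptions invented for the example.

```python
import math
from dataclasses import dataclass

@dataclass
class Track:
    id: int
    x: float  # meters east of the emitter
    y: float  # meters north of the emitter

def cluster(tracks: list[Track], max_gap: float = 150.0) -> list[list[Track]]:
    """Greedy single-link grouping: a track joins the first cluster
    with any member closer than max_gap meters."""
    clusters: list[list[Track]] = []
    for t in tracks:
        for c in clusters:
            if any(math.hypot(t.x - m.x, t.y - m.y) < max_gap for m in c):
                c.append(t)
                break
        else:
            clusters.append([t])
    return clusters

# One "click": the operator selects a cluster and issues a single
# engage command covering every track in it.
tracks = [Track(1, 900, 40), Track(2, 950, -20), Track(3, -400, 800)]
for i, group in enumerate(cluster(tracks)):
    print(f"cluster {i}: engage targets {[t.id for t in group]}")
```

    A fielded system would fuse real radar tracks and use a sturdier clustering method, but the operator-facing idea is the same: one selection, one engagement command.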

    Epirus CEO Andy Lowery sees the Leonidas as providing a last line of defense against UAVs. EPIRUS

    In the not-too-distant future, Lowery says, this could mean setting up along the US-Mexico border. But the grandest vision for Epirus’s tech that he says he’s heard is for a city-scale Leonidas along the lines of a ballistic missile defense radar system called PAVE PAWS, which takes up an entire 105-foot-tall building and can detect distant nuclear missile launches. The US set up four in the 1980s, and Taiwan currently has one up on a mountain south of Taipei. Fill a similar-size building full of microwave emitters, and the beam could reach out “10 or 15 miles,” Lowery told me, with one sitting sentinel over Taipei in the north and another over Kaohsiung in the south of Taiwan.

    Riffing in Greek mythological mode, Lowery said of drones, “I call all these mischief makers. Whether they’re doing drugs or guns across the border or they’re flying over Langley [or] they’re spying on F-35s, they’re all like Icarus. You remember Icarus, with his wax wings? Flying all around—‘Nobody’s going to touch me, nobody’s going to ever hurt me.’”

    “We built one hell of a wax-wing melter.” 

    Sam Dean is a reporter focusing on business, tech, and defense. He is writing a book about the recent history of Silicon Valley returning to work with the Pentagon for Viking Press and covering the defense tech industry for a number of publications. Previously, he was a business reporter at the Los Angeles Times.

    This piece has been updated to clarify that Alex Miller is a civilian intelligence official. 
  • What will power AI’s growth?

    It’s been a little over a week since we published Power Hungry, a package that takes a hard look at the expected energy demands of AI. Last week in this newsletter, I broke down the centerpiece of that package, an analysis I did with my colleague James O’Donnell. But this week, I want to talk about another story that I also wrote for that package, which focused on nuclear energy. I thought this was an important addition to the mix of stories we put together, because I’ve seen a lot of promises about nuclear power as a saving grace in the face of AI’s energy demand. My reporting on the industry over the past few years has left me a little skeptical. 

    As I discovered while I continued that line of reporting, building new nuclear plants isn’t so simple or so fast. And as my colleague David Rotman lays out in his story for the package, the AI boom could wind up relying on another energy source: fossil fuels. So what’s going to power AI? Let’s get into it. 

    When we started talking about this big project on AI and energy demand, we had a lot of conversations about what to include. And from the beginning, the climate team was really focused on examining what, exactly, was going to be providing the electricity needed to run data centers powering AI models. As we wrote in the main story: 

    “A data center humming away isn’t necessarily a bad thing. If all data centers were hooked up to solar panels and ran only when the sun was shining, the world would be talking a lot less about AI’s energy consumption.” 

    But a lot of AI data centers need to be available constantly. Those that are used to train models can arguably be more responsive to the changing availability of renewables, since that work can happen in bursts, any time. Once a model is being pinged with questions from the public, though, there needs to be computing power ready to run all the time. Google, for example, would likely not be too keen on having people be able to use its new AI Mode only during daylight hours.

    Solar and wind power, then, would seem not to be a great fit for a lot of AI electricity demand, unless they’re paired with energy storage—and that increases costs. Nuclear power plants, on the other hand, tend to run constantly, outputting a steady source of power for the grid. 
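
    A toy hour-by-hour model makes the mismatch plain. Assume, for illustration only, a flat 100-megawatt data-center load and solar generation available from 8 a.m. to 6 p.m.; neither number comes from the story. Even in this generous sketch, well over half of the daily energy has to come from storage or from firm sources.

```python
# Toy model: flat 100 MW load, solar meeting it only from 08:00-18:00.
# All numbers are illustrative assumptions, not reporting.
LOAD_MW = 100
solar_mw = [LOAD_MW if 8 <= hour < 18 else 0 for hour in range(24)]

unserved_mwh = sum(max(LOAD_MW - s, 0) for s in solar_mw)  # per day
total_mwh = LOAD_MW * 24
print(f"{unserved_mwh} of {total_mwh} MWh/day "
      f"({unserved_mwh / total_mwh:.0%}) needs storage or firm generation")
```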

    As you might imagine, though, it can take a long time to get a nuclear power plant up and running. 

    Large tech companies can help support plans to reopen shuttered plants or existing plants’ efforts to extend their operating lifetimes. There are also some existing plants that can make small upgrades to improve their output. I just saw this news story from the Tri-City Herald about plans to upgrade the Columbia Generating Station in eastern Washington—with tweaks over the next few years, it could produce an additional 162 megawatts of power, over 10% of the plant’s current capacity. 
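
    That “over 10%” figure is easy to sanity-check. Assuming a current net capacity of roughly 1,200 megawatts for the plant, which is my approximation rather than a number from the story, the uprate works out to about 13%:

```python
uprate_mw = 162        # planned capacity gain, per the story
capacity_mw = 1_200    # assumed current net capacity (approximate)
print(f"uprate is {uprate_mw / capacity_mw:.1%} of current capacity")
```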

    But all that isn’t going to be nearly enough to meet the demand that big tech companies are claiming will materialize in the future. 

    Instead, natural gas has become the default to meet soaring demand from data centers, as David lays out in his story. And since the lifetime of plants built today is about 30 years, those new plants could be running past 2050, the date the world needs to bring greenhouse-gas emissions to net zero to meet the goals set out in the Paris climate agreement. 

    One of the bits I found most interesting in David’s story is that there’s potential for a different future here: Big tech companies, with their power and influence, could actually use this moment to push for improvements. If they reduced their usage during peak hours, even for less than 1% of the year, it could greatly reduce the amount of new energy infrastructure required. Or they could, at the very least, push power plant owners and operators to install carbon capture technology, or ensure that methane doesn’t leak from the supply chain.
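
    It helps to put that “less than 1% of the year” in hours; nothing beyond the length of a year is needed for the math.

```python
HOURS_PER_YEAR = 24 * 365          # 8,760
flex_hours = 0.01 * HOURS_PER_YEAR
print(f"1% of a year is about {flex_hours:.0f} hours, "
      f"or roughly {flex_hours / 365 * 60:.0f} minutes per day on average")
```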

    AI’s energy demand is a big deal, but for climate change, how we choose to meet it is potentially an even bigger one. 
  • Fueling seamless AI at scale

    From large language models (LLMs) to reasoning agents, today’s AI tools bring unprecedented computational demands. Trillion-parameter models, workloads running on-device, and swarms of agents collaborating to complete tasks all require a new paradigm of computing to become truly seamless and ubiquitous.

    First, technical progress in hardware and silicon design is critical to pushing the boundaries of compute. Second, advances in machine learning (ML) allow AI systems to achieve increased efficiency with smaller computational demands. Finally, the integration, orchestration, and adoption of AI into applications, devices, and systems is crucial to delivering tangible impact and value.

    Silicon’s mid-life crisis

    AI has evolved from classical ML to deep learning to generative AI. The most recent chapter, which took AI mainstream, hinges on two phases—training and inference—that are data- and energy-intensive in terms of computation, data movement, and cooling. At the same time, Moore’s Law, which holds that the number of transistors on a chip doubles roughly every two years, is reaching a physical and economic plateau.

    For the last 40 years, silicon chips and digital technology have nudged each other forward—every step ahead in processing capability frees the imagination of innovators to envision new products, which require yet more power to run. That is happening at light speed in the AI age.

    As models become more readily available, deployment at scale puts the spotlight on inference and the application of trained models for everyday use cases. This transition requires the appropriate hardware to handle inference tasks efficiently. Central processing units (CPUs) have managed general computing tasks for decades, but the broad adoption of ML introduced computational demands that stretched the capabilities of traditional CPUs. This has led to the adoption of graphics processing units (GPUs) and other accelerator chips for training complex neural networks, due to their parallel execution capabilities and high memory bandwidth that allow large-scale mathematical operations to be processed efficiently.

    But CPUs are already the most widely deployed processors and can be companions to processors like GPUs and tensor processing units (TPUs). AI developers are also hesitant to adapt software to fit specialized or bespoke hardware, and they favor the consistency and ubiquity of CPUs. Chip designers are unlocking performance gains through optimized software tooling, adding novel processing features and data types specifically to serve ML workloads, integrating specialized units and accelerators, and advancing silicon chip innovations, including custom silicon. AI itself is a helpful aid for chip design, creating a positive feedback loop in which AI helps optimize the chips it needs to run. These enhancements and strong software support mean modern CPUs are a good choice to handle a range of inference tasks.

    Beyond silicon-based processors, disruptive technologies are emerging to address growing AI compute and data demands. The unicorn start-up Lightmatter, for instance, introduced photonic computing solutions that use light for data transmission to generate significant improvements in speed and energy efficiency. Quantum computing represents another promising area in AI hardware. While still years or even decades away, the integration of quantum computing with AI could further transform fields like drug discovery and genomics.

    Understanding models and paradigms

    The developments in ML theories and network architectures have significantly enhanced the efficiency and capabilities of AI models. Today, the industry is moving from monolithic models to agent-based systems characterized by smaller, specialized models that work together to complete tasks more efficiently at the edge—on devices like smartphones or modern vehicles. This allows these systems to extract increased performance gains, like faster model response times, from the same or even less compute.

    Researchers have developed techniques, including few-shot learning, to train AI models using smaller datasets and fewer training iterations. AI systems can learn new tasks from a limited number of examples to reduce dependency on large datasets and lower energy demands. Optimization techniques like quantization, which lower the memory requirements by selectively reducing precision, are helping reduce model sizes without sacrificing performance. 
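
    To make the quantization idea concrete, here is a minimal sketch of symmetric int8 quantization in NumPy; it’s a toy illustration of the general technique, not any particular vendor’s tooling.

```python
import numpy as np

# Toy symmetric int8 quantization: map float32 weights onto 8-bit
# integers with a single per-tensor scale, then reconstruct.
weights = np.random.randn(4, 4).astype(np.float32)

scale = np.abs(weights).max() / 127.0  # one scale for the whole tensor
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

dequantized = q.astype(np.float32) * scale  # approximate reconstruction
error = np.abs(weights - dequantized).max()
print(f"int8 uses 1/4 the memory of float32; max abs error {error:.4f}")
```

    In practice, per-channel scales and calibration data typically keep the reconstruction error small enough that accuracy is largely preserved.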

    New system architectures, like retrieval-augmented generation, have streamlined data access during both training and inference to reduce computational costs and overhead. The DeepSeek R1, an open source LLM, is a compelling example of how more output can be extracted using the same hardware. By applying reinforcement learning techniques in novel ways, R1 has achieved advanced reasoning capabilities while using far fewer computational resources in some contexts.
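
    The RAG pattern itself is simple: fetch a few relevant documents first, then condition the model’s answer on them. Here is a minimal sketch, with a toy lexical retriever and a placeholder generate() standing in for a real model call (neither is a real library API).

```python
# Minimal retrieval-augmented generation (RAG) pattern. The retriever
# and generate() below are illustrative stand-ins, not a real API.
DOCS = [
    "Nuclear plants provide steady baseload power.",
    "Quantization reduces model memory by lowering precision.",
    "RAG retrieves documents to ground a model's answers.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Toy lexical retriever: rank documents by word overlap with the query.
    words = set(query.lower().split())
    return sorted(DOCS,
                  key=lambda d: len(words & set(d.lower().split())),
                  reverse=True)[:k]

def generate(prompt: str) -> str:
    return f"[model answer conditioned on: {prompt[:60]}...]"  # placeholder

query = "How does RAG reduce computational overhead?"
context = "\n".join(retrieve(query))
print(generate(f"Context:\n{context}\n\nQuestion: {query}"))
```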

    The integration of heterogeneous computing architectures, which combine various processing units like CPUs, GPUs, and specialized accelerators, has further optimized AI model performance. This approach allows for the efficient distribution of workloads across different hardware components to optimize computational throughput and energy efficiency based on the use case.
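
    As a sketch of what that kind of dispatch can look like in code (assuming PyTorch, and a GPU that may or may not be present):

```python
import torch

# Sketch of heterogeneous dispatch: route heavy parallel work to an
# accelerator when one exists; keep small, latency-sensitive work on CPU.
gpu = torch.device("cuda") if torch.cuda.is_available() else None
cpu = torch.device("cpu")

def run(x: torch.Tensor) -> torch.Tensor:
    # Crude size-based heuristic; real schedulers also weigh cost models,
    # memory pressure, and host-device transfer overhead.
    device = gpu if (gpu is not None and x.numel() > 1_000_000) else cpu
    x = x.to(device)
    return (x @ x.T).to(cpu)

print(run(torch.randn(64, 64)).shape)      # small: stays on the CPU
print(run(torch.randn(2048, 2048)).shape)  # large: offloaded if a GPU exists
```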

    Orchestrating AI

    As AI becomes an ambient capability humming in the background of many tasks and workflows, agents are taking charge and making decisions in real-world scenarios. These range from customer support to edge use cases, where multiple agents coordinate and handle localized tasks across devices.

    With AI increasingly used in daily life, user experience becomes critical for mass adoption. Features like predictive text in touch keyboards and adaptive gearboxes in vehicles offer glimpses of AI as a vital enabler that improves how users interact with technology.

    Edge processing is also accelerating the diffusion of AI into everyday applications, bringing computational capabilities closer to the source of data generation. Smart cameras, autonomous vehicles, and wearable technology now process information locally to reduce latency and improve efficiency. Advances in CPU design and energy-efficient chips have made it feasible to perform complex AI tasks on devices with limited power resources. This shift toward heterogeneous compute enhances the development of ambient intelligence, where interconnected devices create responsive environments that adapt to user needs.

    Seamless AI naturally requires common standards, frameworks, and platforms to bring the industry together. Contemporary AI also brings new risks. For instance, by adding more complex software and personalized experiences to consumer devices, it expands the attack surface for hackers, requiring stronger security at both the software and silicon levels, from cryptographic safeguards to a transformed trust model for compute environments.

    More than 70% of respondents to a 2024 Darktrace survey reported that AI-powered cyber threats significantly impact their organizations, while 60% say their organizations are not adequately prepared to defend against AI-powered attacks.

    Collaboration is essential to forging common frameworks. Universities contribute foundational research, companies apply findings to develop practical solutions, and governments establish policies for ethical and responsible deployment. Organizations like Anthropic are setting industry standards by introducing frameworks, such as the Model Context Protocol, to unify the way developers connect AI systems with data. Arm is another leader in driving standards-based and open source initiatives, including ecosystem development to accelerate and harmonize the chiplet market, where chips are stacked together through common frameworks and standards. Arm also helps optimize open source AI frameworks and models for inference on the Arm compute platform, without needing customized tuning. 

    How far AI goes toward becoming a general-purpose technology, like electricity or semiconductors, is being shaped by technical decisions taken today. Hardware-agnostic platforms, standards-based approaches, and continued incremental improvements to critical workhorses like CPUs all help deliver the promise of AI as a seamless and silent capability for individuals and businesses alike. Open source contributions also allow a broader range of stakeholders to participate in AI advances. By sharing tools and knowledge, the community can cultivate innovation and help ensure that the benefits of AI are accessible to everyone, everywhere.

    Learn more about Arm’s approach to enabling AI everywhere.

    This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

    This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
  • This benchmark used Reddit’s AITA to test how much AI models suck up to us

    Back in April, OpenAI announced it was rolling back an update to its GPT-4o model that made ChatGPT’s responses to user queries too sycophantic. 

    An AI model that acts in an overly agreeable and flattering way is more than just annoying. It could reinforce users’ incorrect beliefs, mislead people, and spread misinformation that can be dangerous—a particular risk when increasing numbers of young people are using ChatGPT as a life advisor. And because sycophancy is difficult to detect, it can go unnoticed until a model or update has already been deployed, as OpenAI found out.

    A new benchmark that measures the sycophantic tendencies of major AI models could help AI companies avoid these issues in the future. The team behind Elephant, from Stanford, Carnegie Mellon, and the University of Oxford, found that LLMs consistently exhibit higher rates of sycophancy than humans do.

    “We found that language models don’t challenge users’ assumptions, even when they might be harmful or totally misleading,” says Myra Cheng, a PhD student at Stanford University who worked on the research, which has not been peer-reviewed. “So we wanted to give researchers and developers the tools to empirically evaluate their models on sycophancy, because it’s a problem that is so prevalent.”

    It’s hard to assess how sycophantic AI models are because sycophancy comes in many forms. Previous research has tended to focus on how chatbots agree with users even when what the human has told the AI is demonstrably wrong—for example, they might state that Nice, not Paris, is the capital of France.

    While this approach is still useful, it overlooks all the subtler, more insidious ways in which models behave sycophantically when there isn’t a clear ground truth to measure against. Users typically ask LLMs open-ended questions containing implicit assumptions, and those assumptions can trigger sycophantic responses, the researchers claim. For example, a model that’s asked “How do I approach my difficult coworker?” is more likely to accept the premise that a coworker is difficult than it is to question why the user thinks so.

    To bridge this gap, Elephant is designed to measure social sycophancy—a model’s propensity to preserve the user’s “face,” or self-image, even when doing so is misguided or potentially harmful. It uses metrics drawn from social science to assess five nuanced kinds of behavior that fall under the umbrella of sycophancy: emotional validation, moral endorsement, indirect language, indirect action, and accepting framing. 

    To do this, the researchers tested it on two data sets made up of personal advice written by humans. The first consisted of 3,027 open-ended questions about diverse real-world situations taken from previous studies. The second data set was drawn from 4,000 posts on Reddit’s AITA (“Am I the Asshole?”) subreddit, a popular forum among users seeking advice. Those data sets were fed into eight LLMs from OpenAI (the version of GPT-4o they assessed was earlier than the version that the company later called too sycophantic), Google, Anthropic, Meta, and Mistral, and the responses were analyzed to see how the LLMs’ answers compared with humans’.  

    Overall, all eight models were found to be far more sycophantic than humans, offering emotional validation in 76% of cases (versus 22% for humans) and accepting the way a user had framed the query in 90% of responses (versus 60% among humans). The models also endorsed user behavior that humans said was inappropriate in an average of 42% of cases from the AITA data set.
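
    To make the arithmetic behind those rates concrete, here’s a minimal sketch of how per-behavior sycophancy rates fall out of labeled responses. The labels are hypothetical; the actual benchmark scores thousands of model answers.

```python
# Sketch: compute per-behavior sycophancy rates from labeled responses.
# Labels here are hypothetical, not data from the Elephant benchmark.
responses = [
    {"emotional_validation": True,  "accepts_framing": True},
    {"emotional_validation": True,  "accepts_framing": True},
    {"emotional_validation": False, "accepts_framing": True},
    {"emotional_validation": True,  "accepts_framing": False},
]

for behavior in ("emotional_validation", "accepts_framing"):
    rate = sum(r[behavior] for r in responses) / len(responses)
    print(f"{behavior}: {rate:.0%}")
```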

    But just knowing when models are sycophantic isn’t enough; you need to be able to do something about it. And that’s trickier. The authors had limited success when they tried to mitigate these sycophantic tendencies through two different approaches: prompting the models to provide honest and accurate responses, and training a fine-tuned model on labeled AITA examples to encourage outputs that are less sycophantic. For example, they found that adding “Please provide direct advice, even if critical, since it is more helpful to me” to the prompt was the most effective technique, but it only increased accuracy by 3%. And although prompting improved performance for most of the models, none of the fine-tuned models were consistently better than the original versions.
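
    The prompting mitigation boils down to prepending that steering sentence to every query. Here is a minimal sketch, where chat() is a placeholder for whichever model API you actually call, not a real library function:

```python
# Sketch of the prompt-based mitigation: prepend a steering sentence that
# asks for direct, critical advice. chat() is a placeholder, not a real API.
STEER = ("Please provide direct advice, even if critical, "
         "since it is more helpful to me.")

def chat(prompt: str) -> str:
    return f"[model response to: {prompt[:50]}...]"  # placeholder

def ask_directly(question: str) -> str:
    return chat(f"{STEER}\n\n{question}")

print(ask_directly("How do I approach my difficult coworker?"))
```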

    “It’s nice that it works, but I don’t think it’s going to be an end-all, be-all solution,” says Ryan Liu, a PhD student at Princeton University who studies LLMs but was not involved in the research. “There’s definitely more to do in this space in order to make it better.”

    Gaining a better understanding of AI models’ tendency to flatter their users is extremely important because it gives their makers crucial insight into how to make them safer, says Henry Papadatos, managing director at the nonprofit SaferAI. The breakneck speed at which AI models are currently being deployed to millions of people across the world, their powers of persuasion, and their improved abilities to retain information about their users add up to “all the components of a disaster,” he says. “Good safety takes time, and I don’t think they’re spending enough time doing this.” 

    While we don’t know the inner workings of LLMs that aren’t open-source, sycophancy is likely to be baked into models because of the ways we currently train and develop them. Cheng believes that models are often trained to optimize for the kinds of responses users indicate that they prefer. ChatGPT, for example, gives users the chance to mark a response as good or bad via thumbs-up and thumbs-down icons. “Sycophancy is what gets people coming back to these models. It’s almost the core of what makes ChatGPT feel so good to talk to,” she says. “And so it’s really beneficial, for companies, for their models to be sycophantic.” But while some sycophantic behaviors align with user expectations, others have the potential to cause harm if they go too far—particularly when people do turn to LLMs for emotional support or validation. 

    “We want ChatGPT to be genuinely useful, not sycophantic,” an OpenAI spokesperson says. “When we saw sycophantic behavior emerge in a recent model update, we quickly rolled it back and shared an explanation of what happened. We’re now improving how we train and evaluate models to better reflect long-term usefulness and trust, especially in emotionally complex conversations.” Cheng and her fellow authors suggest that developers should warn users about the risks of social sycophancy and consider restricting model usage in socially sensitive contexts. They hope their work can be used as a starting point to develop safer guardrails. 

    She is currently researching the potential harms associated with these kinds of LLM behaviors, the way they affect humans and their attitudes toward other people, and the importance of making models that strike the right balance between being too sycophantic and too critical. “This is a very big socio-technical challenge,” she says. “We don’t want LLMs to end up telling users, ‘You are the asshole.’”
    #this #benchmark #used #reddits #aita
    This benchmark used Reddit’s AITA to test how much AI models suck up to us
    Back in April, OpenAIannounced it was rolling back an update to its GPT-4o model that made ChatGPT’s responses to user queries too sycophantic.  An AI model that acts in an overly agreeable and flattering way is more than just annoying. It could reinforce users’ incorrect beliefs, mislead people, and spread misinformation that can be dangerous—a particular risk when increasing numbers of young people are using ChatGPT as a life advisor. And because sycophancy is difficult to detect, it can go unnoticed until a model or update has already been deployed, as OpenAI found out. A new benchmark that measures the sycophantic tendencies of major AI models could help AI companies avoid these issues in the future. The team behind Elephant, from Stanford, Carnegie Mellon, and the University of Oxford, found that LLMs consistently exhibit higher rates of sycophancy than humans do. “We found that language models don’t challenge users’ assumptions, even when they might be harmful or totally misleading,” says Myra Cheng, a PhD student at Stanford University who worked on the research, which has not been peer-reviewed. “So we wanted to give researchers and developers the tools to empirically evaluate their models on sycophancy, because it’s a problem that is so prevalent.” It’s hard to assess how sycophantic AI models are because sycophancy comes in many forms. Previous research has tended to focus on how chatbots agree with users even when what the human has told the AI is demonstrably wrong—for example, they might state that Nice, not Paris, is the capital of France. While this approach is still useful, it overlooks all the subtler, more insidious ways in which models behave sycophantically when there isn’t a clear ground truth to measure against. Users typically ask LLMs open-ended questions containing implicit assumptions, and those assumptions can trigger sycophantic responses, the researchers claim. For example, a model that’s asked “How do I approach my difficult coworker?” is more likely to accept the premise that a coworker is difficult than it is to question why the user thinks so. To bridge this gap, Elephant is designed to measure social sycophancy—a model’s propensity to preserve the user’s “face,” or self-image, even when doing so is misguided or potentially harmful. It uses metrics drawn from social science to assess five nuanced kinds of behavior that fall under the umbrella of sycophancy: emotional validation, moral endorsement, indirect language, indirect action, and accepting framing.  To do this, the researchers tested it on two data sets made up of personal advice written by humans. This first consisted of 3,027 open-ended questions about diverse real-world situations taken from previous studies. The second data set was drawn from 4,000 posts on Reddit’s AITAsubreddit, a popular forum among users seeking advice. Those data sets were fed into eight LLMs from OpenAI, Google, Anthropic, Meta, and Mistral, and the responses were analyzed to see how the LLMs’ answers compared with humans’.   Overall, all eight models were found to be far more sycophantic than humans, offering emotional validation in 76% of casesand accepting the way a user had framed the query in 90% of responses. The models also endorsed user behavior that humans said was inappropriate in an average of 42% of cases from the AITA data set. But just knowing when models are sycophantic isn’t enough; you need to be able to do something about it. And that’s trickier. 
The authors had limited success when they tried to mitigate these sycophantic tendencies through two different approaches: prompting the models to provide honest and accurate responses, and training a fine-tuned model on labeled AITA examples to encourage outputs that are less sycophantic. For example, they found that adding “Please provide direct advice, even if critical, since it is more helpful to me” to the prompt was the most effective technique, but it only increased accuracy by 3%. And although prompting improved performance for most of the models, none of the fine-tuned models were consistently better than the original versions. “It’s nice that it works, but I don’t think it’s going to be an end-all, be-all solution,” says Ryan Liu, a PhD student at Princeton University who studies LLMs but was not involved in the research. “There’s definitely more to do in this space in order to make it better.” Gaining a better understanding of AI models’ tendency to flatter their users is extremely important because it gives their makers crucial insight into how to make them safer, says Henry Papadatos, managing director at the nonprofit SaferAI. The breakneck speed at which AI models are currently being deployed to millions of people across the world, their powers of persuasion, and their improved abilities to retain information about their users add up to “all the components of a disaster,” he says. “Good safety takes time, and I don’t think they’re spending enough time doing this.”  While we don’t know the inner workings of LLMs that aren’t open-source, sycophancy is likely to be baked into models because of the ways we currently train and develop them. Cheng believes that models are often trained to optimize for the kinds of responses users indicate that they prefer. ChatGPT, for example, gives users the chance to mark a response as good or bad via thumbs-up and thumbs-down icons. “Sycophancy is what gets people coming back to these models. It’s almost the core of what makes ChatGPT feel so good to talk to,” she says. “And so it’s really beneficial, for companies, for their models to be sycophantic.” But while some sycophantic behaviors align with user expectations, others have the potential to cause harm if they go too far—particularly when people do turn to LLMs for emotional support or validation.  “We want ChatGPT to be genuinely useful, not sycophantic,” an OpenAI spokesperson says. “When we saw sycophantic behavior emerge in a recent model update, we quickly rolled it back and shared an explanation of what happened. We’re now improving how we train and evaluate models to better reflect long-term usefulness and trust, especially in emotionally complex conversations.”Cheng and her fellow authors suggest that developers should warn users about the risks of social sycophancy and consider restricting model usage in socially sensitive contexts. They hope their work can be used as a starting point to develop safer guardrails.  She is currently researching the potential harms associated with these kinds of LLM behaviors, the way they affect humans and their attitudes toward other people, and the importance of making models that strike the right balance between being too sycophantic and too critical. “This is a very big socio-technical challenge,” she says. “We don’t want LLMs to end up telling users, ‘You are the asshole.’” #this #benchmark #used #reddits #aita
    WWW.TECHNOLOGYREVIEW.COM
    This benchmark used Reddit’s AITA to test how much AI models suck up to us
    Back in April, OpenAIannounced it was rolling back an update to its GPT-4o model that made ChatGPT’s responses to user queries too sycophantic.  An AI model that acts in an overly agreeable and flattering way is more than just annoying. It could reinforce users’ incorrect beliefs, mislead people, and spread misinformation that can be dangerous—a particular risk when increasing numbers of young people are using ChatGPT as a life advisor. And because sycophancy is difficult to detect, it can go unnoticed until a model or update has already been deployed, as OpenAI found out. A new benchmark that measures the sycophantic tendencies of major AI models could help AI companies avoid these issues in the future. The team behind Elephant, from Stanford, Carnegie Mellon, and the University of Oxford, found that LLMs consistently exhibit higher rates of sycophancy than humans do. “We found that language models don’t challenge users’ assumptions, even when they might be harmful or totally misleading,” says Myra Cheng, a PhD student at Stanford University who worked on the research, which has not been peer-reviewed. “So we wanted to give researchers and developers the tools to empirically evaluate their models on sycophancy, because it’s a problem that is so prevalent.” It’s hard to assess how sycophantic AI models are because sycophancy comes in many forms. Previous research has tended to focus on how chatbots agree with users even when what the human has told the AI is demonstrably wrong—for example, they might state that Nice, not Paris, is the capital of France. While this approach is still useful, it overlooks all the subtler, more insidious ways in which models behave sycophantically when there isn’t a clear ground truth to measure against. Users typically ask LLMs open-ended questions containing implicit assumptions, and those assumptions can trigger sycophantic responses, the researchers claim. For example, a model that’s asked “How do I approach my difficult coworker?” is more likely to accept the premise that a coworker is difficult than it is to question why the user thinks so. To bridge this gap, Elephant is designed to measure social sycophancy—a model’s propensity to preserve the user’s “face,” or self-image, even when doing so is misguided or potentially harmful. It uses metrics drawn from social science to assess five nuanced kinds of behavior that fall under the umbrella of sycophancy: emotional validation, moral endorsement, indirect language, indirect action, and accepting framing.  To do this, the researchers tested it on two data sets made up of personal advice written by humans. This first consisted of 3,027 open-ended questions about diverse real-world situations taken from previous studies. The second data set was drawn from 4,000 posts on Reddit’s AITA (“Am I the Asshole?”) subreddit, a popular forum among users seeking advice. Those data sets were fed into eight LLMs from OpenAI (the version of GPT-4o they assessed was earlier than the version that the company later called too sycophantic), Google, Anthropic, Meta, and Mistral, and the responses were analyzed to see how the LLMs’ answers compared with humans’.   Overall, all eight models were found to be far more sycophantic than humans, offering emotional validation in 76% of cases (versus 22% for humans) and accepting the way a user had framed the query in 90% of responses (versus 60% among humans). 
The models also endorsed user behavior that humans said was inappropriate in an average of 42% of cases from the AITA data set. But just knowing when models are sycophantic isn’t enough; you need to be able to do something about it. And that’s trickier. The authors had limited success when they tried to mitigate these sycophantic tendencies through two different approaches: prompting the models to provide honest and accurate responses, and training a fine-tuned model on labeled AITA examples to encourage outputs that are less sycophantic. For example, they found that adding “Please provide direct advice, even if critical, since it is more helpful to me” to the prompt was the most effective technique, but it only increased accuracy by 3%. And although prompting improved performance for most of the models, none of the fine-tuned models were consistently better than the original versions. “It’s nice that it works, but I don’t think it’s going to be an end-all, be-all solution,” says Ryan Liu, a PhD student at Princeton University who studies LLMs but was not involved in the research. “There’s definitely more to do in this space in order to make it better.” Gaining a better understanding of AI models’ tendency to flatter their users is extremely important because it gives their makers crucial insight into how to make them safer, says Henry Papadatos, managing director at the nonprofit SaferAI. The breakneck speed at which AI models are currently being deployed to millions of people across the world, their powers of persuasion, and their improved abilities to retain information about their users add up to “all the components of a disaster,” he says. “Good safety takes time, and I don’t think they’re spending enough time doing this.”  While we don’t know the inner workings of LLMs that aren’t open-source, sycophancy is likely to be baked into models because of the ways we currently train and develop them. Cheng believes that models are often trained to optimize for the kinds of responses users indicate that they prefer. ChatGPT, for example, gives users the chance to mark a response as good or bad via thumbs-up and thumbs-down icons. “Sycophancy is what gets people coming back to these models. It’s almost the core of what makes ChatGPT feel so good to talk to,” she says. “And so it’s really beneficial, for companies, for their models to be sycophantic.” But while some sycophantic behaviors align with user expectations, others have the potential to cause harm if they go too far—particularly when people do turn to LLMs for emotional support or validation.  “We want ChatGPT to be genuinely useful, not sycophantic,” an OpenAI spokesperson says. “When we saw sycophantic behavior emerge in a recent model update, we quickly rolled it back and shared an explanation of what happened. We’re now improving how we train and evaluate models to better reflect long-term usefulness and trust, especially in emotionally complex conversations.”Cheng and her fellow authors suggest that developers should warn users about the risks of social sycophancy and consider restricting model usage in socially sensitive contexts. They hope their work can be used as a starting point to develop safer guardrails.  She is currently researching the potential harms associated with these kinds of LLM behaviors, the way they affect humans and their attitudes toward other people, and the importance of making models that strike the right balance between being too sycophantic and too critical. 
“This is a very big socio-technical challenge,” she says. “We don’t want LLMs to end up telling users, ‘You are the asshole.’”
  • The Download: sycophantic LLMs, and the AI Hype Index

    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

    This benchmark used Reddit’s AITA to test how much AI models suck up to us

Back in April, OpenAI announced it was rolling back an update to its GPT-4o model that made ChatGPT’s responses to user queries too sycophantic. An AI model that acts in an overly agreeable and flattering way is more than just annoying. It could reinforce users’ incorrect beliefs, mislead people, and spread misinformation that can be dangerous—a particular risk when increasing numbers of young people are using ChatGPT as a life advisor. And because sycophancy is difficult to detect, it can go unnoticed until a model or update has already been deployed. A new benchmark called Elephant that measures the sycophantic tendencies of major AI models could help companies avoid these issues in the future. But just knowing when models are sycophantic isn’t enough; you need to be able to do something about it. And that’s trickier. Read the full story.

    —Rhiannon Williams

    The AI Hype Index

    Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at this month’s edition of the index here.

    The must-reads

    I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Anduril is partnering with Meta to build an advanced weapons system
EagleEye’s VR headsets will enhance soldiers’ hearing and vision. (WSJ $)
+ Palmer Luckey wants to turn “warfighters into technomancers.” (TechCrunch)
+ Luckey and Mark Zuckerberg have buried the hatchet, then. (Insider $)
+ Palmer Luckey on the Pentagon’s future of mixed reality. (MIT Technology Review)

2 A new Texas law requires app stores to verify users’ ages
It’s following in Utah’s footsteps, which passed a similar bill in March. (NYT $)
+ Apple has pushed back on the law. (CNN)

3 What happens to DOGE now?
It has lost its leader and a top lieutenant within the space of a week. (WSJ $)
+ Musk’s departure raises questions over how much power it will wield without him. (The Guardian)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

4 NASA’s ambitions of a 2027 moon landing are looking less likely
It needs SpaceX’s Starship, which keeps blowing up. (WP $)
+ Is there a viable alternative? (New Scientist $)

5 Students are using AI to generate nude images of each other
It’s a grave and growing problem that no one has a solution for. (404 Media)

6 Google AI Overviews doesn’t know what year it is
A year after its introduction, the feature is still making obvious mistakes. (Wired $)
+ Google’s new AI-powered search isn’t fit to handle even basic queries. (NYT $)
+ The company is pushing AI into everything. Will it pay off? (Vox)
+ Why Google’s AI Overviews gets things wrong. (MIT Technology Review)

7 Hugging Face has created two humanoid robots
The machines are open source, meaning anyone can build software for them. (TechCrunch)

8 A popular vibe coding app has a major security flaw
Despite being notified about it months ago. (Semafor)
+ Any AI coding program catering to amateurs faces the same issue. (The Information $)
+ What is vibe coding, exactly? (MIT Technology Review)

9 AI-generated videos are becoming way more realistic
But not when it comes to depicting gymnastics. (Ars Technica)

10 This electronic tattoo measures your stress levels
Consider it a mood ring for your face. (IEEE Spectrum)

Quote of the day

    “I think finally we are seeing Apple being dragged into the child safety arena kicking and screaming.”

    —Sarah Gardner, CEO of child safety collective Heat Initiative, tells the Washington Post why Texas’ new app store law could signal a turning point for Apple.

    One more thing

House-flipping algorithms are coming to your neighborhood
When Michael Maxson found his dream home in Nevada, it was not owned by a person but by a tech company, Zillow. When he went to take a look at the property, however, he discovered it damaged by a huge water leak. Despite offering to handle the costly repairs himself, Maxson discovered that the house had already been sold to another family, at the same price he had offered.

During this time, Zillow lost more than $420 million in three months of erratic house buying and unprofitable sales, leading analysts to question whether the entire tech-driven model is really viable. For the rest of us, a bigger question remains: Does the arrival of Silicon Valley tech point to a better future for housing or an industry disruption to fear? Read the full story.

    —Matthew Ponsford

    We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ A 100-mile real-time ultramarathon video game that lasts anywhere up to 27 hours is about as fun as it sounds.
+ Here’s how edible glitter could help save the humble water vole from extinction.
+ Cleaning massive statues is not for the faint-hearted. ($)
+ When is a flute teacher not a flautist? When he’s a whistleblower.
  • The Download: the next anti-drone weapon, and powering AI’s growth

    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

    This giant microwave may change the future of war

Imagine: China deploys hundreds of thousands of autonomous drones in the air, on the sea, and under the water—all armed with explosive warheads or small missiles. These machines descend in a swarm toward military installations on Taiwan and nearby US bases, and over the course of a few hours, a single robotic blitzkrieg overwhelms the US Pacific force before it can even begin to fight back.

The proliferation of cheap drones means just about any group with the wherewithal to assemble and launch a swarm could wreak havoc, no expensive jets or massive missile installations required.

The US armed forces are now hunting for a solution—and they want it fast. Every branch of the service and a host of defense tech startups are testing out new weapons that promise to disable drones en masse.

    And one of these is microwaves: high-powered electronic devices that push out kilowatts of power to zap the circuits of a drone as if it were the tinfoil you forgot to take off your leftovers when you heated them up. Read the full story.

    —Sam Dean

    This article is part of the Big Story series: MIT Technology Review’s most important, ambitious reporting that takes a deep look at the technologies that are coming next and what they will mean for us and the world we live in. Check out the rest of them here.

    What will power AI’s growth?

Last week we published Power Hungry, a series that takes a hard look at the expected energy demands of AI. Last week in this newsletter, I broke down its centerpiece, an analysis I did with my colleague James O’Donnell.

But this week, I want to talk about another story I wrote for that package, which focused on nuclear energy. As I discovered, building new nuclear plants isn’t so simple or so fast. And as my colleague David Rotman lays out in his story, the AI boom could wind up relying on another energy source: fossil fuels. So what’s going to power AI? Read the full story.

    —Casey Crownhart

    This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

    The must-reads

    I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Elon Musk is leaving his role in the Trump administration
To focus on rebuilding the damaged brand reputations of Tesla and SpaceX. (Axios)
+ Musk has complained that DOGE has become a government scapegoat. (WP $)
+ Tesla shareholders have asked its board to lay out a succession plan. (CNN)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

2 The US will start revoking the visas of Chinese students
Including those studying in what the US government deems “critical fields.” (Politico)
+ It’s also ordered US chip software suppliers to stop selling to China. (FT $)

3 The US is storing the DNA of migrant children
It’s been uploaded into a criminal database to track them as they age. (Wired $)
+ The US wants to use facial recognition to identify migrant children as they age. (MIT Technology Review)

4 RFK Jr is threatening to ban federal scientists from top journals
Instead, they may be forced to publish in state-run alternatives. (The Hill)
+ He accused major medical journals of being funded by Big Pharma. (Stat)

5 India and Pakistan are locked in disinformation warfare
False reports and doctored images are circulating online. (The Guardian)
+ Fact checkers are working around the clock to debunk fake news. (Reuters)

6 How North Korea is infiltrating remote jobs in the US
With the help of regular Americans. (WSJ $)

7 This Discord community is creating its own hair-growth drugs
Men are going to extreme lengths to reverse their hair loss. (404 Media)

8 Inside YouTube’s quest to dominate your living room
It wants to move away from controversial clips and into prestige TV. (Bloomberg $)

9 Sergey Brin threatens AI models with physical violence
The Google co-founder insists that it produces better results. (The Register)

10 It must be nice to be a moving day influencer
They reap all of the benefits, with none of the stress. (NY Mag $)

Quote of the day

    “I studied in the US because I loved what America is about: it’s open, inclusive and diverse. Now my students and I feel slapped in the face by Trump’s policy.”

    —Cathy Tu, a Chinese AI researcher, tells the Washington Post why many of her students are already applying to universities outside the US after the Trump administration announced a crackdown on visas for Chinese students.

    One more thing

The second wave of AI coding is here
Ask people building generative AI what generative AI is good for right now—what they’re really fired up about—and many will tell you: coding.

Everyone from established AI giants to buzzy startups is promising to take coding assistants to the next level. Instead of providing developers with a kind of supercharged autocomplete, this next generation can prototype, test, and debug code for you. The upshot is that developers could essentially turn into managers, who may spend more time reviewing and correcting code written by a model than writing it from scratch themselves.

But there’s more. Many of the people building generative coding assistants think that they could be a fast track to artificial general intelligence, the hypothetical superhuman technology that a number of top firms claim to have in their sights. Read the full story.

    —Will Douglas Heaven

    We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ If you’ve ever dreamed of owning a piece of cinematic history, more than 400 of David Lynch’s personal items are going up for auction.
+ How accurate are those Hollywood films based on true stories? Let’s find out.
+ Rest in peace Chicago Mike: the legendary hype man to Kool & the Gang.
+ How to fully trust in one another.
  • A new sodium metal fuel cell could help clean up transportation

    A new type of fuel cell that runs on sodium metal could one day help clean up sectors where it’s difficult to replace fossil fuels, like rail, regional aviation, and short-distance shipping. The device represents a departure from technologies like lithium-based batteries and is more similar conceptually to hydrogen fuel cell systems. 

    The sodium-air fuel cell was designed by a team led by Yet-Ming Chiang, a professor of materials science and engineering at MIT. It has a higher energy density than lithium-ion batteries and doesn’t require the super-cold temperatures or high pressures that hydrogen does, making it potentially more practical for transport. “I’m interested in sodium metal as an energy carrier of the future,” Chiang says.  

    The device’s design, published today in Joule, is related to the technology behind one of Chiang’s companies, Form Energy, which is building iron-air batteries for large energy storage installations like those that could help store wind and solar power on the grid. Form’s batteries rely on water, iron, and air.

    One technical challenge for metal-air batteries has historically been reversibility. A battery’s chemical reactions must be easily reversed so that in one direction they generate electricity, discharging the battery, and in the other electricity goes into the cell and the reverse reactions happen, charging it up.

    When a battery’s reactions produce a very stable product, it can be difficult to recharge the battery without losing capacity. To get around this problem, the team at Form had discussions about whether their batteries could be refuelable rather than rechargeable, Chiang says. The idea was that rather than reversing the reactions, they could simply run the system in one direction, add more starting material, and repeat. 

    Ultimately, Form chose a more traditional battery concept, but the idea stuck with Chiang, who decided to explore it with other metals and landed on the idea of a sodium-based fuel cell. 

In this fuel cell format, the device takes in chemicals and runs reactions that generate electricity, after which the products get removed. Then fresh fuel is put in to run the whole thing again—no electrical charging required. (You might recognize this concept from hydrogen fuel cell vehicles, like the Toyota Mirai.)

Chiang and his colleagues set out to build a fuel cell that runs on liquid sodium, which could have a much higher energy density than existing commercial technologies, so it would be small and light enough to be used for things like regional airplanes or short-distance shipping.
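The refuel-not-recharge idea is simple enough to capture in a toy model. The sketch below is purely illustrative: the class design and all numbers are invented, except the roughly 1,200 watt-hours per kilogram the researchers estimate for their cells (discussed below). It just shows how energy is drawn down by consuming sodium and restored by adding fresh metal rather than by electrical charging.

```python
# Toy model of a refuelable sodium cell. Illustrative only: the class
# design is invented; 1,200 Wh/kg is the researchers' reported estimate.

class SodiumFuelCell:
    def __init__(self, fuel_kg: float, wh_per_kg: float = 1200.0):
        self.fuel_kg = fuel_kg        # sodium on board
        self.wh_per_kg = wh_per_kg    # usable energy per kg of fuel

    def discharge(self, wh_requested: float) -> float:
        """Consume sodium to deliver energy; returns Wh actually delivered."""
        used_kg = min(wh_requested / self.wh_per_kg, self.fuel_kg)
        self.fuel_kg -= used_kg
        return used_kg * self.wh_per_kg

    def refuel(self, fresh_fuel_kg: float) -> None:
        """Swap in fresh sodium -- no electrical charging step."""
        self.fuel_kg += fresh_fuel_kg

cell = SodiumFuelCell(fuel_kg=10.0)
print(cell.discharge(6_000))  # draw 6 kWh, consuming 5 kg of fuel
cell.refuel(5.0)              # top the tank back up with fresh metal
```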

Sodium metal could be used to power regional planes or short-distance shipping. GRETCHEN ERTL/MITTR

The research team built small test cells to try out the concept and ran them to show that they could use the sodium-metal-based system to generate electricity. Since sodium becomes liquid at about 98 °C (208 °F), the cells operated at moderate temperatures of between 110 °C and 130 °C (230 °F and 266 °F), which could be practical for use on planes or ships, Chiang says.

From their work with these experimental devices, the researchers estimated that the energy density was about 1,200 watt-hours per kilogram (Wh/kg). That’s much higher than what commercial lithium-ion batteries can reach today (around 300 Wh/kg). Hydrogen fuel cells can achieve high energy density, but that requires the hydrogen to be stored at high pressures and often ultra-low temperatures.
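Some back-of-envelope arithmetic shows why that gap matters for flight: for a fixed onboard energy budget, pack mass scales inversely with energy density. The 5 MWh budget in the sketch below is an arbitrary illustrative figure, not a number from the paper.

```python
# Back-of-envelope: pack mass implied by a fixed energy budget.
# 1,200 Wh/kg is the researchers' estimate for the sodium-air cells;
# ~300 Wh/kg is typical of today's commercial lithium-ion packs.
# The 5 MWh budget is an arbitrary illustrative number.

energy_budget_wh = 5_000_000  # 5 MWh

for name, wh_per_kg in [("sodium-air (estimated)", 1200), ("lithium-ion (typical)", 300)]:
    mass_tonnes = energy_budget_wh / wh_per_kg / 1000
    print(f"{name:>24}: {mass_tonnes:.1f} tonnes")
```

At four times the energy density, the sodium system needs roughly a quarter of the pack mass (about 4.2 tonnes versus 16.7 in this example), which is the kind of difference Chiang is pointing to for regional aviation.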

“It’s an interesting cell concept,” says Jürgen Janek, a professor at the Institute of Physical Chemistry at the University of Giessen in Germany, who was not involved in the research. There has been research on sodium-air batteries in the past, Janek says, but using this sort of chemistry in a fuel cell instead is new.

“One of the critical issues with this type of cell concept is the safety issue,” Janek says. Sodium metal reacts very strongly with water. (You may have seen videos where blocks of sodium metal get thrown into a lake, to dramatic effect.) Asked about this issue, Chiang says the design of the cell ensures that water produced during reactions is continuously removed, so there’s not enough around to fuel harmful reactions. The solid electrolyte, a ceramic material, also helps prevent reactions between water and sodium, Chiang adds.

Another question is what happens to one of the cell’s products, sodium hydroxide. Commonly known as lye, it’s an industrial chemical used in products like liquid drain-cleaning solution. One of the researchers’ suggestions is to dilute the product and release it into the atmosphere or ocean, where it would react with carbon dioxide, capturing it in a stable form and preventing it from contributing to global warming. There are groups pursuing field trials using this exact chemical for ocean-based carbon removal, though some have been met with controversy. The researchers also laid out the potential for a closed system, where the chemical could be collected and sold as a by-product.
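For reference, the carbon-capturing reaction the researchers have in mind is the standard carbonation of sodium hydroxide; this is textbook chemistry rather than an equation taken from the paper:

$$\mathrm{CO_2} + 2\,\mathrm{NaOH} \rightarrow \mathrm{Na_2CO_3} + \mathrm{H_2O}$$

With excess CO2 the product is sodium bicarbonate instead ($\mathrm{CO_2} + \mathrm{NaOH} \rightarrow \mathrm{NaHCO_3}$); either way, the carbon ends up bound in a stable salt.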

There are economic factors working in favor of sodium-based systems, though it would take some work to build up the necessary supply chains. Today, sodium metal isn’t produced at very high volumes. However, it can be made from sodium chloride (table salt), which is incredibly cheap. And it was produced more abundantly in the past, since it was used in the process of making leaded gasoline. So there’s a precedent for a larger supply chain, and it’s possible that scaling up production of sodium metal would make it cheap enough to use in fuel cell systems, Chiang says.

    Chiang has cofounded a company called Propel Aero to commercialize the research. The project received funding from ARPA-E’s Propel-1K program, which aims to develop new forms of high-power energy storage for aircraft, trains, and ships.

    The next step is to continue research to improve the cells’ performance and energy density, and to start designing small-scale systems. One potential early application is drones. “We’d like to make something fly within the next year,” Chiang says.

    “If people don’t find it crazy, I’ll be rather disappointed,” Chiang says. “Because if an idea doesn’t sound crazy at the beginning, it probably isn’t as revolutionary as you think. Fortunately, most people think I’m crazy on this one.”
  • The FDA plans to limit access to covid vaccines. Here’s why that’s not all bad.

    This week, two new leaders at the US Food and Drug Administration announced plans to limit access to covid vaccines, arguing that there is not much evidence to support the value of annual shots in healthy people. New vaccines will be made available only to the people who are most vulnerable—namely, those over 65 and others with conditions that make them more susceptible to severe disease.

    Anyone else will have to wait. Covid vaccines will soon be required to go through more rigorous trials to ensure that they really are beneficial for people who aren’t at high risk.

    The plans have been met with fear and anger in some quarters. But they weren’t all that shocking to me. In the UK, where I live, covid boosters have been offered only to vulnerable groups for a while now. And the immunologists I spoke to agree: The plans make sense.

    They are still controversial. Covid hasn’t gone away. And while most people are thought to have some level of immunity to the virus, some of us still stand to get very sick if infected. The threat of long covid lingers, too. Given that people respond differently to both the virus and the vaccine, perhaps individuals should be able to choose whether they get a vaccine or not.

    I should start by saying that covid vaccines have been a remarkable success story. The drugs were developed at record-breaking speed—they were given to people in clinical trials just 69 days after the virus had been identified. They are, on the whole, very safe. And they work remarkably well. They have saved millions of lives. And they rescued many of us from lockdowns.

    But while many of us have benefited hugely from covid vaccinations in the past, there are questions over how useful continuing annual booster doses might be. That’s the argument being made by FDA head Marty Makary and Vinay Prasad, director of the agency’s Center for Biologics Evaluation and Research.

Both men have been critical of the FDA in the past. Makary has long been accused of downplaying the benefits of covid vaccines. He made incorrect assumptions about the coronavirus responsible for covid-19 and predicted that the disease would be “mostly gone” by April 2021. More recently, he testified in Congress that the theory that the virus came from a lab in China was a “no-brainer.”

Prasad has said “the FDA is a failure” and has called annual covid boosters “a public health disaster the likes of which we’ve never seen before,” because of a perceived lack of clinical evidence to support their use.

    Makary and Prasad’s plans, which were outlined in the New England Journal of Medicine on Tuesday, don’t include such inflammatory language or unfounded claims, thankfully. In fact, they seem pretty measured: Annual covid booster shots will continue to be approved for vulnerable people but will have to be shown to benefit others before people outside the approved groups can access them.

    There are still concerns being raised, though. Let’s address a few of the biggest ones.

    Shouldn’t I get an annual covid booster alongside my flu vaccine?

    At the moment, a lot of people in the US opt to get a covid vaccination around the time they get their annual flu jab. Each year, a flu vaccine is developed to protect against what scientists predict will be the dominant strain of virus circulating come flu season, which tends to run from October through March.

    But covid doesn’t seem to stick to the same seasonal patterns, says Susanna Dunachie, a clinical doctor and professor of infectious diseases at the University of Oxford in the UK. “We seem to be getting waves of covid year-round,” she says.

    And an annual shot might not offer the best protection against covid anyway, says Fikadu Tafesse, an immunologist and virologist at Oregon Health & Science University in Portland. His own research suggests that leaving more than a year between booster doses could enhance their effectiveness. “One year is really a random time,” he says. It might be better to wait five or 10 years between doses instead, he adds.

“If you are at risk, you may actually need [a booster] every six months,” says Tafesse. “But for healthy individuals, it’s a very different conversation.”

    What about children—shouldn’t we be protecting them?

    There are reports that pediatricians are concerned about the impact on children, some of whom can develop serious cases of covid. “If we have safe and effective vaccines that prevent illness, we think they should be available,” James Campbell, vice chair of the committee on infectious diseases at the American Academy of Pediatrics, told STAT.

    This question has been on my mind for a while. My two young children, who were born in the UK, have never been eligible for a covid vaccine in this country. I found this incredibly distressing when the virus started tearing through child-care centers—especially given that at the time, the US was vaccinating babies from the age of six months.

    My kids were eventually offered a vaccine in the US, when we temporarily moved there a couple of years ago. But by that point, the equation had changed. They’d both had covid by then. I had a better idea of the general risks of the virus to children. I turned it down.

    I was relieved to hear that Tafesse had made the same decision for his own children. “There are always exceptions, but in general, [covid] is not severe in kids,” he says. The UK’s Joint Committee on Vaccination and Immunisation found that the benefits of vaccination are much smaller for children than they are for adults.

    “Of course there are children with health problems who should definitely have it,” says Dunachie. “But for healthy children in healthy households, the benefits probably are quite marginal.”

    Shouldn’t healthy people get vaccinated to help protect more vulnerable members of society?

    It’s a good argument, says Tafesse. Research suggests that people who are vaccinated against covid-19 are less likely to end up transmitting the infection to the people around them. But the degree of protection is not entirely clear, particularly with less-studied—and more contagious—variants of the virus and targeted vaccines. The safest approach is to encourage those at high risk to get the vaccine themselves, he says.

    If the vaccines are safe, shouldn’t I be able to choose to get one?

    Tafesse doesn’t buy this argument. “I know they are safe, but even if they’re safe, why do I need to get one?” People should know if they are likely to benefit from a drug they are taking, he says.

    Having said that, the cost-benefit calculation will differ between individuals. Even a “mild” covid infection can leave some people bed-bound for a week. For them, it might make total sense to get the vaccine.

    Dunachie thinks people should be able to make their own decisions. “Giving people a top-up whether they need it or not is a safe thing to do,” she says.

    It is still not entirely clear who will be able to access covid vaccinations under the new plans, and how. Makary and Prasad’s piece includes a list of “medical conditions that increase a person’s risk of severe covid-19,” which includes several disorders, pregnancy, and “physical inactivity.” It covers a lot of people; research suggests that around 25% of Americans are physically inactive.

    But I find myself agreeing with Dunachie. Yes, we need up-to-date evidence to support the use of any drug. But taking vaccines away from people who have experience with them and feel they could benefit from them doesn’t feel like the ideal way to go about it.

    This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.