Tech billionaires are making a risky bet with humanity's future
"The best way to predict the future is to invent it," the famed computer scientist Alan Kay once said. Uttered more out of exasperation than inspiration, his remark has nevertheless attained gospel-like status among Silicon Valley entrepreneurs, in particular a handful of tech billionaires who fancy themselves the chief architects of humanity's future.
Sam Altman, Jeff Bezos, Elon Musk, and others may have slightly different goals and ambitions in the near term, but their grand visions for the next decade and beyond are remarkably similar. Framed less as technological objectives and more as existential imperatives, they include aligning AI with the interests of humanity; creating an artificial superintelligence that will solve all the world's most pressing problems; merging with that superintelligence to achieve immortality; establishing a permanent, self-sustaining colony on Mars; and, ultimately, spreading out across the cosmos.
While there's a sprawling patchwork of ideas and philosophies powering these visions, three features play a central role, says Adam Becker, a science writer and astrophysicist: an unshakable certainty that technology can solve any problem, a belief in the necessity of perpetual growth, and a quasi-religious obsession with transcending our physical and biological limits. In his timely new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity, Becker calls this triumvirate of beliefs the "ideology of technological salvation" and warns that tech titans are using it to steer humanity in a dangerous direction.
"In most of these isms you'll find the idea of escape and transcendence, as well as the promise of an amazing future, full of unimaginable wonders, so long as we don't get in the way of technological progress."
"The credence that tech billionaires give to these specific science-fictional futures validates their pursuit of more: to portray the growth of their businesses as a moral imperative, to reduce the complex problems of the world to simple questions of technology, to justify nearly any action they might want to take," he writes. Becker argues that the only way to break free of these visions is to see them for what they are: a convenient excuse to continue destroying the environment, skirt regulations, amass more power and control, and dismiss the very real problems of today to focus on the imagined ones of tomorrow.
A lot of critics, academics, and journalists have tried to define or distill the Silicon Valley ethos over the years. There was the "Californian Ideology" in the mid-'90s, the "Move fast and break things" era of the early 2000s, and more recently the "Libertarianism for me, feudalism for thee" or "techno-authoritarian" views. How do you see the "ideology of technological salvation" fitting in?
I'd say it's very much of a piece with those earlier attempts to describe the Silicon Valley mindset. I mean, you can draw a pretty straight line from Max More's principles of transhumanism in the '90s to the Californian Ideology and through to what I call the ideology of technological salvation. The fact is, many of the ideas that define or animate Silicon Valley thinking have never been much of a mystery: libertarianism, an antipathy toward the government and regulation, the boundless faith in technology, the obsession with optimization.
What can be difficult is to parse where all these ideas come from and how they fit together, or if they fit together at all. I came up with the ideology of technological salvation as a way to name and give shape to a group of interrelated concepts and philosophies that can seem sprawling and ill-defined at first, but that actually sit at the center of a worldview shared by venture capitalists, executives, and other thought leaders in the tech industry.
Readers will likely be familiar with the tech billionaires featured in your book and at least some of their ambitions. I'm guessing they'll be less familiar with the various "isms" that you argue have influenced or guided their thinking. Effective altruism, rationalism, longtermism, extropianism, effective accelerationism, futurism, singularitarianism, transhumanism. There are a lot of them. Is there something that they all share?
They're definitely connected. In a sense, you could say they're all versions or instantiations of the ideology of technological salvation, but there are also some very deep historical connections between the people in these groups and their aims and beliefs. The Extropians in the late '80s believed in self-transformation through technology and freedom from limitations of any kind, ideas that Ray Kurzweil eventually helped popularize and legitimize for a larger audience with the Singularity.
In most of these isms you'll find the idea of escape and transcendence, as well as the promise of an amazing future, full of unimaginable wonders, so long as we don't get in the way of technological progress. I should say that AI researcher Timnit Gebru and philosopher Émile Torres have also done a lot of great work linking these ideologies to one another and showing how they all have ties to racism, misogyny, and eugenics.
You argue that the Singularity is the purest expression of the ideology of technological salvation. How so?
Well, for one thing, it's just this very simple, straightforward idea: the Singularity is coming and will occur when we merge our brains with the cloud and expand our intelligence a millionfold. This will then deepen our awareness and consciousness, and everything will be amazing. In many ways, it's a fantastical vision of a perfect technological utopia. We're all going to live as long as we want in an eternal paradise, watched over by machines of loving grace, and everything will just get exponentially better forever. The end.
The other isms I talk about in the book have a little more … heft isn't the right word; they just have more stuff going on. There's more to them, right? The rationalists and the effective altruists and the longtermists all think that something like a singularity will happen, or could happen, but that there's this really big danger between where we are now and that potential event. We have to address the fact that an all-powerful AI might destroy humanity (the so-called alignment problem) before any singularity can happen.
Then you've got the effective accelerationists, who are more like Kurzweil, but they've got more of a tech-bro spin on things. They've taken some of the older transhumanist ideas from the Singularity and updated them for startup culture. Marc Andreessen's "Techno-Optimist Manifesto" is a good example. You could argue that all of these other philosophies that have gained purchase in Silicon Valley are just twists on Kurzweil's Singularity, each one building on top of the core ideas of transcendence, techno-optimism, and exponential growth.
Early on in the book you take aim at that idea of exponential growth, specifically Kurzweil's "Law of Accelerating Returns." Could you explain what that is and why you think it's flawed?
Kurzweil thinks there's this immutable "Law of Accelerating Returns" at work in the affairs of the universe, especially when it comes to technology. It's the idea that technological progress isn't linear but exponential: advancements in one technology fuel even more rapid advancements in the future, which in turn lead to greater complexity and greater technological power, and on and on. This is just a mistake. Kurzweil uses the Law of Accelerating Returns to explain why the Singularity is inevitable, but to be clear, he's far from the only one who believes in this so-called law.
"I really believe that when you get as rich as some of these guys are, you can just do things that seem like thinking and no one is really going to correct you or tell you things you don't want to hear."
My sense is that it's an idea that comes from staring at Moore's Law for too long. Moore's Law is of course the famous prediction that the number of transistors on a chip will double roughly every two years, with a minimal increase in cost. Now, that has in fact happened for the last 50 years or so, but not because of some fundamental law in the universe. It's because the tech industry made a choice, and some very sizable investments, to make it happen. Moore's Law was ultimately this really interesting observation or projection of a historical trend, but even Gordon Moore knew that it wouldn't and couldn't last forever. In fact, some think it's already over.
These ideologies take inspiration from some pretty unsavory characters. Transhumanism, you say, was first popularized by the eugenicist Julian Huxley in a speech in 1951. Marc Andreessen's "Techno-Optimist Manifesto" name-checks the noted fascist Filippo Tommaso Marinetti and his futurist manifesto. Did you get the sense while researching the book that the tech titans who champion these ideas understand their dangerous origins?
You're assuming in the framing of that question that there's any rigorous thought going on here at all. As I say in the book, Andreessen's manifesto runs almost entirely on vibes, not logic. I think someone may have told him about the futurist manifesto at some point, and he just sort of liked the general vibe, which is why he paraphrases a part of it. Maybe he learned something about Marinetti and forgot it. Maybe he didn't care.
I really believe that when you get as rich as some of these guys are, you can just do things that seem like thinking and no one is really going to correct you or tell you things you don't want to hear. For many of these billionaires, the vibes of fascism, authoritarianism, and colonialism are attractive because they're fundamentally about creating a fantasy of control.
You argue that these visions of the future are being used to hasten environmental destruction, increase authoritarianism, and exacerbate inequalities. You also admit that they appeal to lots of people who aren't billionaires. Why do you think that is?
I think a lot of us are also attracted to these ideas for the same reasons the tech billionaires are: they offer this fantasy of knowing what the future holds, of transcending death, and a sense that someone or something out there is in control. It's hard to overstate how comforting a simple, coherent narrative can be in an increasingly complex and fast-moving world. This is of course what religion offers for many of us, and I don't think it's an accident that a sizable number of people in the rationalist and effective altruist communities are actually ex-evangelicals.
More than any one specific technology, it seems like the most consequential thing these billionaires have invented is a sense of inevitability: that their visions for the future are somehow predestined. How does one fight against that?
It's a difficult question. For me, the answer was to write this book. I guess I'd also say this: Silicon Valley enjoyed well over a decade with little to no pushback on anything. That's definitely a big part of how we ended up in this mess. There was no regulation, very little critical coverage in the press, and a lot of self-mythologizing going on. Things have started to change, especially as the social and environmental damage that tech companies and industry leaders have helped facilitate has become more clear. That understanding is an essential part of deflating the power of these tech billionaires and breaking free of their visions. When we understand that these dreams of the future are actually nightmares for the rest of us, I think you'll see that sense of inevitability vanish pretty fast.
This interview was edited for length and clarity.
Bryan Gardiner is a writer based in Oakland, California.