• Hey everyone!

    Today, I want to dive into something truly fascinating and groundbreaking that’s making waves in the tech world: **superintelligence**! The recent news about Meta's investment in Scale AI and their ambitious plans to create a superintelligence AI research lab is incredibly exciting! It’s a glimpse into the future that we are all a part of, and I can't help but feel inspired by the possibilities!

    So, what exactly is superintelligence? In essence, it refers to a form of artificial intelligence that surpasses human intelligence in virtually every aspect. Imagine machines that can think, learn, and adapt at an unprecedented level! The potential for positive change and innovation is enormous! Just think about how this technology could transform industries, solve complex problems, and even improve our everyday lives!

    Meta is taking a bold step by investing in this field, and it shows just how serious they are about shaping our future. Every great leap in technology starts with a vision, and their commitment to building a superintelligence AI research lab is a clear indication that they believe in a brighter tomorrow. Just imagine the breakthroughs that could come from this initiative! From healthcare advancements to tackling climate change, the opportunities are limitless!

    What I find truly inspiring is how this move encourages collaboration among brilliant minds across the globe. The quest for superintelligence is not just about creating smart machines; it’s about bringing together diverse perspectives, ideas, and skills to push the boundaries of what’s possible! Let’s celebrate this spirit of innovation and teamwork!

    And here’s the most exciting part: You don’t have to be a tech expert to be a part of this journey! Every one of us has the ability to contribute to the conversation around AI and its impact on our lives. Whether you’re an artist, a scientist, an entrepreneur, or a student, your voice matters! Let’s dream big and think about how we can leverage technology to create a better world for everyone!

    As we move forward, let’s keep the dialogue open and embrace the changes that superintelligence might bring. Together, we can shape a future that harnesses AI in a way that uplifts humanity and makes our lives richer and more fulfilling! So, let’s stay positive, curious, and engaged! The future is bright, and it’s ours to create!

    Stay tuned for more updates, and let’s keep this conversation going! What are your thoughts on superintelligence? How do you envision it impacting our world? Share your ideas below!

    #Superintelligence #Meta #AIResearch #Innovation #FutureTech
    Seriously, What Is ‘Superintelligence’?
    In this episode of Uncanny Valley, we talk about Meta’s recent investment in Scale AI and its move to build a superintelligence AI research lab. So we ask: What is superintelligence anyway?
  • Tech billionaires are making a risky bet with humanity’s future

    “The best way to predict the future is to invent it,” the famed computer scientist Alan Kay once said. Uttered more out of exasperation than as inspiration, his remark has nevertheless attained gospel-like status among Silicon Valley entrepreneurs, in particular a handful of tech billionaires who fancy themselves the chief architects of humanity’s future. 

    Sam Altman, Jeff Bezos, Elon Musk, and others may have slightly different goals and ambitions in the near term, but their grand visions for the next decade and beyond are remarkably similar. Framed less as technological objectives and more as existential imperatives, they include aligning AI with the interests of humanity; creating an artificial superintelligence that will solve all the world’s most pressing problems; merging with that superintelligence to achieve immortality (or something close to it); establishing a permanent, self-sustaining colony on Mars; and, ultimately, spreading out across the cosmos.

    While there’s a sprawling patchwork of ideas and philosophies powering these visions, three features play a central role, says Adam Becker, a science writer and astrophysicist: an unshakable certainty that technology can solve any problem, a belief in the necessity of perpetual growth, and a quasi-religious obsession with transcending our physical and biological limits. In his timely new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, Becker calls this triumvirate of beliefs the “ideology of technological salvation” and warns that tech titans are using it to steer humanity in a dangerous direction. 

    “In most of these isms you’ll find the idea of escape and transcendence, as well as the promise of an amazing future, full of unimaginable wonders—so long as we don’t get in the way of technological progress.”

    “The credence that tech billionaires give to these specific science-fictional futures validates their pursuit of more—to portray the growth of their businesses as a moral imperative, to reduce the complex problems of the world to simple questions of technology, [and] to justify nearly any action they might want to take,” he writes. Becker argues that the only way to break free of these visions is to see them for what they are: a convenient excuse to continue destroying the environment, skirt regulations, amass more power and control, and dismiss the very real problems of today to focus on the imagined ones of tomorrow.

    A lot of critics, academics, and journalists have tried to define or distill the Silicon Valley ethos over the years. There was the “Californian Ideology” in the mid-’90s, the “Move fast and break things” era of the early 2000s, and more recently the “Libertarianism for me, feudalism for thee” or “techno-authoritarian” views. How do you see the “ideology of technological salvation” fitting in?

    I’d say it’s very much of a piece with those earlier attempts to describe the Silicon Valley mindset. I mean, you can draw a pretty straight line from Max More’s principles of transhumanism in the ’90s to the Californian Ideology [a mashup of countercultural, libertarian, and neoliberal values] and through to what I call the ideology of technological salvation. The fact is, many of the ideas that define or animate Silicon Valley thinking have never been much of a mystery—libertarianism, an antipathy toward the government and regulation, the boundless faith in technology, the obsession with optimization.

    What can be difficult is to parse where all these ideas come from and how they fit together—or if they fit together at all. I came up with the ideology of technological salvation as a way to name and give shape to a group of interrelated concepts and philosophies that can seem sprawling and ill-defined at first, but that actually sit at the center of a worldview shared by venture capitalists, executives, and other thought leaders in the tech industry. 

    Readers will likely be familiar with the tech billionaires featured in your book and at least some of their ambitions. I’m guessing they’ll be less familiar with the various “isms” that you argue have influenced or guided their thinking. Effective altruism, rationalism, longtermism, extropianism, effective accelerationism, futurism, singularitarianism, transhumanism—there are a lot of them. Is there something that they all share?

    They’re definitely connected. In a sense, you could say they’re all versions or instantiations of the ideology of technological salvation, but there are also some very deep historical connections between the people in these groups and their aims and beliefs. The Extropians in the late ’80s believed in self-transformation through technology and freedom from limitations of any kind—ideas that Ray Kurzweil eventually helped popularize and legitimize for a larger audience with the Singularity.

    In most of these isms you’ll find the idea of escape and transcendence, as well as the promise of an amazing future, full of unimaginable wonders—so long as we don’t get in the way of technological progress. I should say that AI researcher Timnit Gebru and philosopher Émile Torres have also done a lot of great work linking these ideologies to one another and showing how they all have ties to racism, misogyny, and eugenics.

    You argue that the Singularity is the purest expression of the ideology of technological salvation. How so?

    Well, for one thing, it’s just this very simple, straightforward idea—the Singularity is coming and will occur when we merge our brains with the cloud and expand our intelligence a millionfold. This will then deepen our awareness and consciousness and everything will be amazing. In many ways, it’s a fantastical vision of a perfect technological utopia. We’re all going to live as long as we want in an eternal paradise, watched over by machines of loving grace, and everything will just get exponentially better forever. The end.

    The other isms I talk about in the book have a little more … heft isn’t the right word—they just have more stuff going on. There’s more to them, right? The rationalists and the effective altruists and the longtermists—they think that something like a singularity will happen, or could happen, but that there’s this really big danger between where we are now and that potential event. We have to address the fact that an all-powerful AI might destroy humanity—the so-called alignment problem—before any singularity can happen. 

    Then you’ve got the effective accelerationists, who are more like Kurzweil, but they’ve got more of a tech-bro spin on things. They’ve taken some of the older transhumanist ideas from the Singularity and updated them for startup culture. Marc Andreessen’s “Techno-Optimist Manifesto” [from 2023] is a good example. You could argue that all of these other philosophies that have gained purchase in Silicon Valley are just twists on Kurzweil’s Singularity, each one building on top of the core ideas of transcendence, techno-optimism, and exponential growth.

    Early on in the book you take aim at that idea of exponential growth—specifically, Kurzweil’s “Law of Accelerating Returns.” Could you explain what that is and why you think it’s flawed?

    Kurzweil thinks there’s this immutable “Law of Accelerating Returns” at work in the affairs of the universe, especially when it comes to technology. It’s the idea that technological progress isn’t linear but exponential. Advancements in one technology fuel even more rapid advancements in the future, which in turn lead to greater complexity and greater technological power, and on and on. This is just a mistake. Kurzweil uses the Law of Accelerating Returns to explain why the Singularity is inevitable, but to be clear, he’s far from the only one who believes in this so-called law.

    “I really believe that when you get as rich as some of these guys are, you can just do things that seem like thinking and no one is really going to correct you or tell you things you don’t want to hear.”

    My sense is that it’s an idea that comes from staring at Moore’s Law for too long. Moore’s Law is of course the famous prediction that the number of transistors on a chip will double roughly every two years, with a minimal increase in cost. Now, that has in fact happened for the last 50 years or so, but not because of some fundamental law in the universe. It’s because the tech industry made a choice and some very sizable investments to make it happen. Moore’s Law was ultimately this really interesting observation or projection of a historical trend, but even Gordon Moore [who first articulated it] knew that it wouldn’t and couldn’t last forever. In fact, some think it’s already over.
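
    To make the scale of that doubling claim concrete, here is a minimal back-of-the-envelope sketch (an editorial illustration, not from the interview; the 1971 starting count is only an example figure) comparing what "double every two years" yields over 50 years against plain linear growth:

```python
# Back-of-the-envelope illustration (not from the article): what "double every
# two years" implies over 50 years, compared with linear growth. The starting
# count (~2,300 transistors, roughly an early-1970s chip) is an example figure.

def doubling_growth(start: float, years: float, doubling_period: float = 2.0) -> float:
    """Exponential growth: the quantity doubles every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

def linear_growth(start: float, years: float, per_year: float) -> float:
    """Linear growth for comparison: a fixed amount is added each year."""
    return start + per_year * years

start, years = 2_300, 50
print(f"Doubling every 2 years: {doubling_growth(start, years):,.0f}")      # ~77 billion
print(f"Adding 2,300 per year:  {linear_growth(start, years, 2_300):,.0f}")  # 117,300
# The exponential curve lands near the transistor counts of today's largest chips;
# the linear one doesn't come close. Kurzweil's "law" generalizes this pattern far
# beyond chips, which is the leap Becker is criticizing.
```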

    These ideologies take inspiration from some pretty unsavory characters. Transhumanism, you say, was first popularized by the eugenicist Julian Huxley in a speech in 1951. Marc Andreessen’s “Techno-Optimist Manifesto” name-checks the noted fascist Filippo Tommaso Marinetti and his futurist manifesto. Did you get the sense while researching the book that the tech titans who champion these ideas understand their dangerous origins?

    You’re assuming in the framing of that question that there’s any rigorous thought going on here at all. As I say in the book, Andreessen’s manifesto runs almost entirely on vibes, not logic. I think someone may have told him about the futurist manifesto at some point, and he just sort of liked the general vibe, which is why he paraphrases a part of it. Maybe he learned something about Marinetti and forgot it. Maybe he didn’t care. 

    I really believe that when you get as rich as some of these guys are, you can just do things that seem like thinking and no one is really going to correct you or tell you things you don’t want to hear. For many of these billionaires, the vibes of fascism, authoritarianism, and colonialism are attractive because they’re fundamentally about creating a fantasy of control. 

    You argue that these visions of the future are being used to hasten environmental destruction, increase authoritarianism, and exacerbate inequalities. You also admit that they appeal to lots of people who aren’t billionaires. Why do you think that is? 

    I think a lot of us are also attracted to these ideas for the same reasons the tech billionaires are—they offer this fantasy of knowing what the future holds, of transcending death, and a sense that someone or something out there is in control. It’s hard to overstate how comforting a simple, coherent narrative can be in an increasingly complex and fast-moving world. This is of course what religion offers for many of us, and I don’t think it’s an accident that a sizable number of people in the rationalist and effective altruist communities are actually ex-evangelicals.

    More than any one specific technology, it seems like the most consequential thing these billionaires have invented is a sense of inevitability—that their visions for the future are somehow predestined. How does one fight against that?

    It’s a difficult question. For me, the answer was to write this book. I guess I’d also say this: Silicon Valley enjoyed well over a decade with little to no pushback on anything. That’s definitely a big part of how we ended up in this mess. There was no regulation, very little critical coverage in the press, and a lot of self-mythologizing going on. Things have started to change, especially as the social and environmental damage that tech companies and industry leaders have helped facilitate has become more clear. That understanding is an essential part of deflating the power of these tech billionaires and breaking free of their visions. When we understand that these dreams of the future are actually nightmares for the rest of us, I think you’ll see that sense of inevitability vanish pretty fast.

    This interview was edited for length and clarity.

    Bryan Gardiner is a writer based in Oakland, California. 
  • Inside Mark Zuckerberg’s AI hiring spree

    AI researchers have recently been asking themselves a version of the question, “Is that really Zuck?” As first reported by Bloomberg, the Meta CEO has been personally asking top AI talent to join his new “superintelligence” AI lab and reboot Llama. His recruiting process typically goes like this: a cold outreach via email or WhatsApp that cites the recruit’s work history and requests a 15-minute chat. Dozens of researchers have gotten these kinds of messages at Google alone. For those who do agree to hear his pitch (amazingly, not all of them do), Zuckerberg highlights the latitude they’ll have to make risky bets, the scale of Meta’s products, and the money he’s prepared to invest in the infrastructure to support them. He makes clear that this new team will be empowered and sit with him at Meta’s headquarters, where I’m told the desks have already been rearranged for the incoming team.

    Most of the headlines so far have focused on the eye-popping compensation packages Zuckerberg is offering, some of which are well into the eight-figure range. As I’ve covered before, hiring the best AI researcher is like hiring a star basketball player: there are very few of them, and you have to pay up. Case in point: Zuckerberg basically just paid 14 Instagrams to hire away Scale AI CEO Alexandr Wang. It’s easily the most expensive hire of all time, dwarfing the billions that Google spent to rehire Noam Shazeer and his core team from Character.AI (a deal Zuckerberg passed on). “Opportunities of this magnitude often come at a cost,” Wang wrote in his note to employees this week. “In this instance, that cost is my departure.”

    Zuckerberg’s recruiting spree is already starting to rattle his competitors. The day before his offer deadline for some senior OpenAI employees, Sam Altman dropped an essay proclaiming that “before anything else, we are a superintelligence research company.” And after Zuckerberg tried to hire DeepMind CTO Koray Kavukcuoglu, he was given a larger SVP title and now reports directly to Google CEO Sundar Pichai. I expect Wang to have the title of “chief AI officer” at Meta when the new lab is announced. Jack Rae, a principal researcher from DeepMind who has signed on, will lead pre-training.

    Meta certainly needs a reset. According to my sources, Llama has fallen so far behind that Meta’s product teams have recently discussed using AI models from other companies (although that is highly unlikely to happen). Meta’s internal coding tool for engineers, however, is already using Claude. While Meta’s existing AI researchers have good reason to be looking over their shoulders, Zuckerberg’s $14.3 billion investment in Scale is making many longtime employees, or Scaliens, quite wealthy. They were popping champagne in the office this morning. Then, Wang held his last all-hands meeting to say goodbye and cried. He didn’t mention what he would be doing at Meta. I expect his new team will be unveiled within the next few weeks after Zuckerberg gets a critical number of members to officially sign on.

    Tim Cook. Getty Images / The Verge

    Apple’s AI problem

    Apple is accustomed to being on top of the tech industry, and for good reason: the company has enjoyed a nearly unrivaled run of dominance. After spending time at Apple HQ this week for WWDC, I’m not sure that its leaders appreciate the meteorite that is heading their way. The hubris they display suggests they don’t understand how AI is fundamentally changing how people use and build software.

    Heading into the keynote on Monday, everyone knew not to expect the revamped Siri that had been promised the previous year. Apple, to its credit, acknowledged that it dropped the ball there, and it sounds like a large language model rebuild of Siri is very much underway and coming in 2026.

    The AI industry moves much faster than Apple’s release schedule, though. By the time Siri is perhaps good enough to keep pace, it will have to contend with the lock-in that OpenAI and others are building through their memory features. Apple and OpenAI are currently partners, but both companies want to ultimately control the interface for interacting with AI, which puts them on a collision course. Apple’s decision to let developers use its own, on-device foundational models for free in their apps sounds strategically smart, but unfortunately, the models look far from leading. Apple ran its own benchmarks, which aren’t impressive, and has confirmed a measly context window of 4,096 tokens. It’s also saying that the models will be updated alongside its operating systems — a snail’s pace compared to how quickly AI companies move.

    I’d be surprised if any serious developers use these Apple models, although I can see them being helpful to indie devs who are just getting started and don’t want to spend on the leading cloud models. I don’t think most people care about the privacy angle that Apple is claiming as a differentiator; they are already sharing their darkest secrets with ChatGPT and other assistants. Some of the new Apple Intelligence features I demoed this week were impressive, such as live language translation for calls. Mostly, I came away with the impression that the company is heavily leaning on its ChatGPT partnership as a stopgap until Apple Intelligence and Siri are both where they need to be.

    AI probably isn’t a near-term risk to Apple’s business. No one has shipped anything close to the contextually aware Siri that was demoed at last year’s WWDC. People will continue to buy Apple hardware for a long time, even after Sam Altman and Jony Ive announce their first AI device for ChatGPT next year. AR glasses aren’t going mainstream anytime soon either, although we can expect to see more eyewear from Meta, Google, and Snap over the coming year. In aggregate, these AI-powered devices could begin to siphon away engagement from the iPhone, but I don’t see people fully replacing their smartphones for a long time. The bigger question after this week is whether Apple has what it takes to rise to the occasion and culturally reset itself for the AI era. I would have loved to hear Tim Cook address this issue directly, but the only interview he did for WWDC was a cover story in Variety about the company’s new F1 movie.

    Elsewhere

    AI agents are coming. I recently caught up with Databricks CEO Ali Ghodsi ahead of his company’s annual developer conference this week in San Francisco. Given Databricks’ position, he has a unique, bird’s-eye view of where things are headed for AI. He doesn’t envision a near-term future where AI agents completely automate real-world tasks, but he does predict a wave of startups over the next year that will come close to completing actions in areas such as travel booking. He thinks humans will need to approve what an agent does before it goes off and completes a task. “We have most of the airplanes flying automated, and we still want pilots in there.”

    Buyouts are the new normal at Google. That much is clear after this week’s rollout of the “voluntary exit program” in core engineering, the Search organization, and some other divisions. In his internal memo, Search SVP Nick Fox was clear that management thinks buyouts have been successful in other parts of the company that have tried them. In a separate memo I saw, engineering exec Jen Fitzpatrick called the buyouts an “opportunity to create internal mobility and fresh growth opportunities.” Google appears to be attempting a cultural reset, which will be a challenging task for a company of its size. We’ll see if it can pull it off.

    Evan Spiegel wants help with AR glasses. I doubt that his announcement that consumer glasses are coming next year was solely aimed at AR developers. Telegraphing the plan and announcing that Snap has spent billions on hardware to date feels more aimed at potential partners that want to make a bigger glasses play, such as Google. A strategic investment could help insulate Snap from the pain of the stock market. A full acquisition may not be off the table, either. When he was recently asked if he’d be open to a sale, Spiegel didn’t shut it down like he always has, but instead said he’d “consider anything” that helps the company “create the next computing platform.”

    Link list

    More to click on:

    If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting. As always, I welcome your feedback, especially if you’re an AI researcher fielding a juicy job offer. You can respond here or ping me securely on Signal. Thanks for subscribing.
    #inside #mark #zuckerbergs #hiring #spree
    Inside Mark Zuckerberg’s AI hiring spree
AI researchers have recently been asking themselves a version of the question, “Is that really Zuck?” As first reported by Bloomberg, the Meta CEO has been personally asking top AI talent to join his new “superintelligence” AI lab and reboot Llama. His recruiting process typically goes like this: a cold outreach via email or WhatsApp that cites the recruit’s work history and requests a 15-minute chat. Dozens of researchers have gotten these kinds of messages at Google alone. For those who do agree to hear his pitch (amazingly, not all of them do), Zuckerberg highlights the latitude they’ll have to make risky bets, the scale of Meta’s products, and the money he’s prepared to invest in the infrastructure to support them. He makes clear that this new team will be empowered and sit with him at Meta’s headquarters, where I’m told the desks have already been rearranged for the incoming team.

Most of the headlines so far have focused on the eye-popping compensation packages Zuckerberg is offering, some of which are well into the eight-figure range. As I’ve covered before, hiring the best AI researcher is like hiring a star basketball player: there are very few of them, and you have to pay up. Case in point: Zuckerberg basically just paid 14 Instagrams to hire away Scale AI CEO Alexandr Wang. It’s easily the most expensive hire of all time, dwarfing the billions that Google spent to rehire Noam Shazeer and his core team from Character.AI (a deal Zuckerberg passed on). “Opportunities of this magnitude often come at a cost,” Wang wrote in his note to employees this week. “In this instance, that cost is my departure.”

Zuckerberg’s recruiting spree is already starting to rattle his competitors. The day before his offer deadline for some senior OpenAI employees, Sam Altman dropped an essay proclaiming that “before anything else, we are a superintelligence research company.” And after Zuckerberg tried to hire DeepMind CTO Koray Kavukcuoglu, he was given a larger SVP title and now reports directly to Google CEO Sundar Pichai. I expect Wang to have the title of “chief AI officer” at Meta when the new lab is announced. Jack Rae, a principal researcher from DeepMind who has signed on, will lead pre-training.

Meta certainly needs a reset. According to my sources, Llama has fallen so far behind that Meta’s product teams have recently discussed using AI models from other companies (although that is highly unlikely to happen). Meta’s internal coding tool for engineers, however, is already using Claude. While Meta’s existing AI researchers have good reason to be looking over their shoulders, Zuckerberg’s $14.3 billion investment in Scale is making many longtime employees, or Scaliens, quite wealthy. They were popping champagne in the office this morning. Then, Wang held his last all-hands meeting to say goodbye and cried. He didn’t mention what he would be doing at Meta. I expect his new team will be unveiled within the next few weeks after Zuckerberg gets a critical number of members to officially sign on.

Tim Cook. Getty Images / The Verge

Apple’s AI problem

Apple is accustomed to being on top of the tech industry, and for good reason: the company has enjoyed a nearly unrivaled run of dominance. After spending time at Apple HQ this week for WWDC, I’m not sure that its leaders appreciate the meteorite that is heading their way. The hubris they display suggests they don’t understand how AI is fundamentally changing how people use and build software.

Heading into the keynote on Monday, everyone knew not to expect the revamped Siri that had been promised the previous year. Apple, to its credit, acknowledged that it dropped the ball there, and it sounds like a large language model rebuild of Siri is very much underway and coming in 2026.

The AI industry moves much faster than Apple’s release schedule, though. By the time Siri is perhaps good enough to keep pace, it will have to contend with the lock-in that OpenAI and others are building through their memory features. Apple and OpenAI are currently partners, but both companies want to ultimately control the interface for interacting with AI, which puts them on a collision course. Apple’s decision to let developers use its own, on-device foundational models for free in their apps sounds strategically smart, but unfortunately, the models look far from leading. Apple ran its own benchmarks, which aren’t impressive, and has confirmed a measly context window of 4,096 tokens. It’s also saying that the models will be updated alongside its operating systems — a snail’s pace compared to how quickly AI companies move.
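For a sense of how small that is: a 4,096-token window has to hold the developer's instructions, the user's content, and the model's reply all at once. The sketch below uses the common rough heuristic of about four characters per token (an assumption, not an Apple figure) to show the kind of budgeting check an app targeting such a window would need:

```python
# Rough budgeting check against a 4,096-token context window.
# The 4-characters-per-token ratio is a common heuristic for English text,
# not a real tokenizer; an actual app would count tokens with the model's own tokenizer.
CONTEXT_WINDOW = 4096
CHARS_PER_TOKEN = 4  # assumed average, purely illustrative

def estimated_tokens(text: str) -> int:
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_window(instructions: str, content: str, reply_budget: int = 512) -> bool:
    """True if instructions + content + a reserved reply budget fit the window."""
    used = estimated_tokens(instructions) + estimated_tokens(content) + reply_budget
    return used <= CONTEXT_WINDOW

note = "meeting notes " * 2000  # roughly 28,000 characters of user content
print(fits_in_window("Summarize the user's notes.", note))  # False: must truncate or chunk
```

Anything that doesn't fit has to be truncated, chunked, or handed off to a cloud model, which is part of why the number matters to developers.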
I’d be surprised if any serious developers use these Apple models, although I can see them being helpful to indie devs who are just getting started and don’t want to spend on the leading cloud models. I don’t think most people care about the privacy angle that Apple is claiming as a differentiator; they are already sharing their darkest secrets with ChatGPT and other assistants.

Some of the new Apple Intelligence features I demoed this week were impressive, such as live language translation for calls. Mostly, I came away with the impression that the company is heavily leaning on its ChatGPT partnership as a stopgap until Apple Intelligence and Siri are both where they need to be.

AI probably isn’t a near-term risk to Apple’s business. No one has shipped anything close to the contextually aware Siri that was demoed at last year’s WWDC. People will continue to buy Apple hardware for a long time, even after Sam Altman and Jony Ive announce their first AI device for ChatGPT next year. AR glasses aren’t going mainstream anytime soon either, although we can expect to see more eyewear from Meta, Google, and Snap over the coming year. In aggregate, these AI-powered devices could begin to siphon away engagement from the iPhone, but I don’t see people fully replacing their smartphones for a long time. The bigger question after this week is whether Apple has what it takes to rise to the occasion and culturally reset itself for the AI era. I would have loved to hear Tim Cook address this issue directly, but the only interview he did for WWDC was a cover story in Variety about the company’s new F1 movie.

Elsewhere

AI agents are coming. I recently caught up with Databricks CEO Ali Ghodsi ahead of his company’s annual developer conference this week in San Francisco. Given Databricks’ position, he has a unique, bird’s-eye view of where things are headed for AI. He doesn’t envision a near-term future where AI agents completely automate real-world tasks, but he does predict a wave of startups over the next year that will come close to completing actions in areas such as travel booking. He thinks humans will need (and want) to approve what an agent does before it goes off and completes a task. “We have most of the airplanes flying automated, and we still want pilots in there.”
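Ghodsi's point maps onto a pattern agent builders already use: have the agent draft a plan, show it to the person, and execute only after explicit approval. A minimal sketch of that gate, with hypothetical plan_trip and book helpers standing in for any real travel API:

```python
# Minimal human-in-the-loop approval gate for an agent: the agent proposes,
# a person approves, and only then does anything irreversible happen.
# plan_trip() and book() are hypothetical stand-ins, not a real product's API.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    cost_usd: float

def plan_trip(request: str) -> list[Action]:
    # A real agent would call an LLM and travel APIs here; this returns canned output.
    return [
        Action("Book SFO to JFK round trip, June 20-24", 412.00),
        Action("Reserve 4 nights at a Midtown hotel", 980.00),
    ]

def book(action: Action) -> None:
    print(f"Booked: {action.description} (${action.cost_usd:.2f})")

def run_agent(request: str) -> None:
    plan = plan_trip(request)
    print("Proposed plan:")
    for i, action in enumerate(plan, 1):
        print(f"  {i}. {action.description} (${action.cost_usd:.2f})")
    # The approval gate: nothing is executed until a human says yes.
    if input("Approve and book? [y/N] ").strip().lower() == "y":
        for action in plan:
            book(action)
    else:
        print("Plan discarded; nothing was booked.")

if __name__ == "__main__":
    run_agent("Book me a trip to New York for a June conference.")
```

The design choice is simply that the irreversible step (booking, paying) sits behind the human check rather than inside the planning loop.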
Buyouts are the new normal at Google. That much is clear after this week’s rollout of the “voluntary exit program” in core engineering, the Search organization, and some other divisions. In his internal memo, Search SVP Nick Fox was clear that management thinks buyouts have been successful in other parts of the company that have tried them. In a separate memo I saw, engineering exec Jen Fitzpatrick called the buyouts an “opportunity to create internal mobility and fresh growth opportunities.” Google appears to be attempting a cultural reset, which will be a challenging task for a company of its size. We’ll see if it can pull it off.

Evan Spiegel wants help with AR glasses. I doubt that his announcement that consumer glasses are coming next year was solely aimed at AR developers. Telegraphing the plan and announcing that Snap has spent $3 billion on hardware to date feels more aimed at potential partners that want to make a bigger glasses play, such as Google. A strategic investment could help insulate Snap from the pain of the stock market. A full acquisition may not be off the table, either. When he was recently asked if he’d be open to a sale, Spiegel didn’t shut it down like he always has, but instead said he’d “consider anything” that helps the company “create the next computing platform.”

Link list

More to click on: If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.

As always, I welcome your feedback, especially if you’re an AI researcher fielding a juicy job offer. You can respond here or ping me securely on Signal.

Thanks for subscribing.
  • Fox News AI Newsletter: Hollywood studios sue 'bottomless pit of plagiarism'

The Minions pose during the world premiere of the film "Despicable Me 4" in New York City, June 9, 2024. (REUTERS/Kena Betancur)

Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.

IN TODAY’S NEWSLETTER:
- Major Hollywood studios sue AI company over copyright infringement in landmark move
- Meta's Zuckerberg aiming to dominate AI race with recruiting push for new ‘superintelligence’ team: report
- OpenAI says this state will play central role in artificial intelligence development

The website of Midjourney, an artificial intelligence (AI) capable of creating AI art, is seen on a smartphone on April 3, 2023, in Berlin, Germany. (Thomas Trutschel/Photothek via Getty Images)

'PIRACY IS PIRACY': Two major Hollywood studios are suing Midjourney, a popular AI image generator, over its use and distribution of intellectual property.

AI RACE: Meta CEO Mark Zuckerberg is reportedly building a team of experts to develop artificial general intelligence (AGI) that can meet or exceed human capabilities.

TECH HUB: New York is poised to play a central role in the development of artificial intelligence (AI), OpenAI executives told key business and civic leaders on Tuesday.

Attendees watch a presentation during an event on the Apple campus in Cupertino, Calif., Monday, June 9, 2025. (AP Photo/Jeff Chiu)

APPLE FALLING BEHIND: Apple’s annual Worldwide Developers Conference (WWDC) kicked off on Monday and runs through Friday. But the Cupertino-based company is not making us wait until the end. The major announcements have already been made, and there are quite a few. The headliners are new software versions for Macs, iPhones, iPads and Vision.

FROM COAL TO CODE: This week, Amazon announced a $20 billion investment in artificial intelligence infrastructure in the form of new data centers, the largest in the commonwealth's history, according to the eCommerce giant.

DIGITAL DEFENSE: A growing number of fire departments across the country are turning to artificial intelligence to help detect and respond to wildfires more quickly.

Rep. Darin LaHood, R-Ill., leaves the House Republican Conference meeting at the Capitol Hill Club in Washington on Tuesday, May 17, 2022. (Bill Clark/CQ-Roll Call, Inc via Getty Images)

SHIELD FROM BEIJING: Rep. Darin LaHood, R-Ill., is introducing a new bill Thursday imploring the National Security Agency (NSA) to develop an "AI security playbook" to stay ahead of threats from China and other foreign adversaries.

ROBOT RALLY PARTNER: Finding a reliable tennis partner who matches your energy and skill level can be a challenge. Now, with Tenniix, an artificial intelligence-powered tennis robot from T-Apex, players of all abilities have a new way to practice and improve.

DIGITAL DANGER ZONE: Scam ads on Facebook have evolved beyond the days of misspelled headlines and sketchy product photos. Today, many are powered by artificial intelligence, fueled by deepfake technology and distributed at scale through Facebook’s own ad system.

Fairfield, Ohio, USA - February 25, 2011: Chipotle Mexican Grill logo on a brick building. Chipotle is a chain of fast casual restaurants in the United States and Canada that specialize in burritos and tacos. (iStock)

'EXPONENTIAL RATE': Artificial intelligence is helping Chipotle rapidly grow its footprint, according to CEO Scott Boatwright.

AI TAKEOVER THREAT: The hottest topic nowadays revolves around Artificial Intelligence (AI) and its potential to rapidly and imminently transform the world we live in — economically, socially, politically and even defensively. Regardless of whether you believe that the technology will be able to develop superintelligence and lead a metamorphosis of everything, the possibility that it may come to fruition is a catalyst for more far-leftist control.

Stay up to date on the latest AI technology advancements and learn about the challenges and opportunities AI presents now and for the future with Fox News here. This article was written by Fox News staff.
  • Meta’s $15 Billion Scale AI Deal Could Leave Gig Workers Behind

Meta is reportedly set to invest $15 billion to acquire a 49% stake in Scale AI, in a deal that would make Scale CEO Alexandr Wang head of the tech giant’s new AI unit dedicated to pursuing “superintelligence.”

Scale AI, founded in 2016, is a leading data annotation firm that hires workers around the world to label or create the data that is used to train AI systems.

The deal is expected to greatly enrich Wang and many of his colleagues with equity in Scale AI; Wang, already a billionaire, would see his wealth grow even further. For Meta, it would breathe new life into the company’s flagging attempts to compete at the “frontier” of AI against OpenAI, Google, and Anthropic.

However, Scale’s contract workers, many of whom earn just dollars per day via a subsidiary called RemoTasks, are unlikely to benefit at all from the deal, according to sociologists who study the sector. Typically data workers are not formally employed, and are instead paid for the tasks they complete. Those tasks can include labeling the contents of images, answering questions, or rating which of two chatbots’ answers are better, in order to teach AI systems to better comply with human preferences. (TIME has a content partnership with Scale AI.)
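That last kind of task, picking the better of two model answers, is the raw material for preference tuning. As a rough sketch of what one annotated comparison might look like by the time it reaches a training pipeline (the field names here are hypothetical, not Scale's actual schema):

```python
# Hypothetical example of a pairwise preference record, the kind of data
# annotators produce when rating which of two chatbot answers is better.
# Field names and structure are illustrative, not Scale AI's real format.
preference_record = {
    "task_id": "example-0001",
    "prompt": "Explain in one sentence why the sky is blue.",
    "response_a": "Sunlight scatters off air molecules, and shorter blue wavelengths scatter the most.",
    "response_b": "Because the ocean reflects its color into the atmosphere.",
    "annotator_choice": "response_a",  # the answer the worker judged better
    "reason": "Response A is accurate; response B repeats a common misconception.",
}

def to_training_pair(record: dict) -> dict:
    """Convert an annotated comparison into a (chosen, rejected) pair,
    the shape typically used to train a reward or preference model."""
    chosen_key = record["annotator_choice"]
    rejected_key = "response_b" if chosen_key == "response_a" else "response_a"
    return {
        "prompt": record["prompt"],
        "chosen": record[chosen_key],
        "rejected": record[rejected_key],
    }

print(to_training_pair(preference_record))
```

A reward or preference model trained on many such chosen/rejected pairs is what later nudges a chatbot toward the answers people rate highly.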
“I expect few if any Scale annotators will see any upside at all,” says Callum Cant, a senior lecturer at the University of Essex, U.K., who studies gig work platforms. “It would be very surprising to see some kind of feed-through. Most of these people don’t have a stake in ownership of the company.”

Many of those workers already suffer from low pay and poor working conditions. In a recent report by Oxford University’s Internet Institute, the Scale subsidiary RemoTasks failed to meet basic standards for fair pay, fair contracts, fair management, and fair worker representation.

“A key part of Scale’s value lies in its data work services performed by hundreds of thousands of underpaid and poorly protected workers,” says Jonas Valente, an Oxford researcher who worked on the report. “The company remains far from safeguarding basic standards of fair work, despite limited efforts to improve its practices.”

The Meta deal is unlikely to change that. “Unfortunately, the increasing profits of many digital labor platforms and their primary companies, such as the case of Scale, do not translate into better conditions for [workers],” Valente says.

A Scale AI spokesperson declined to comment for this story. “We're proud of the flexible earning opportunities offered through our platforms,” the company said in a statement to TechCrunch in May.

Meta’s investment also calls into question whether Scale AI will continue supplying data to OpenAI and Google, two of its major clients. In the increasingly competitive AI landscape, observers say Meta may see value in cutting off its rivals from annotated data — an essential means of making AI systems smarter.

“By buying up access to Scale AI, could Meta deny access to that platform and that avenue for data annotation by other competitors?” says Cant. “It depends entirely on Meta’s strategy.”

If that were to happen, Cant says, it could put downward pressure on the wages and tasks available to workers, many of whom already struggle to make ends meet with data work.

A Meta spokesperson declined to comment on this story.
  • Google reportedly plans to cut ties with Scale AI

    In Brief

    Posted:
    11:46 AM PDT · June 14, 2025

    Image Credits: Matthias Balk / picture alliance / Getty Images


    Meta’s big investment in Scale AI may be giving some of the startup’s customers pause.
    Reuters reports that Google had planned to pay Scale $200 million this year but is now having conversations with its competitors and planning to cut ties. Microsoft is also reportedly looking to pull back, and OpenAI supposedly made a similar decision months ago, although its CFO said the company will continue working with Scale as one of many vendors.
    Scale’s customers include self-driving car companies and the U.S. government, but Reuters says its biggest clients are generative AI companies seeking access to workers with specialized knowledge who can annotate data to train models.
    Google declined to comment on the report. A Scale spokesperson declined to comment on the company’s relationship with Google, but he told TechCrunch that Scale’s business remains strong, and that it will continue to operate as an independent company that safeguards its customers’ data.
    Earlier reports suggest that Meta invested $14.3 billion in Scale for a 49% stake in the company, with Scale CEO Alexandr Wang joining Meta to lead the company’s efforts to develop “superintelligence.”

  • The Download: gambling with humanity’s future, and the FDA under Trump

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Tech billionaires are making a risky bet with humanity’s future

Sam Altman, Jeff Bezos, Elon Musk, and others may have slightly different goals, but their grand visions for the next decade and beyond are remarkably similar. They include aligning AI with the interests of humanity; creating an artificial superintelligence that will solve all the world’s most pressing problems; merging with that superintelligence to achieve immortality (or something close to it); establishing a permanent, self-sustaining colony on Mars; and, ultimately, spreading out across the cosmos.

Three features play a central role in powering these visions, says Adam Becker, a science writer and astrophysicist: an unshakable certainty that technology can solve any problem, a belief in the necessity of perpetual growth, and a quasi-religious obsession with transcending our physical and biological limits.

In his timely new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, Becker reveals how these fantastical visions conceal a darker agenda. Read the full story.

    —Bryan Gardiner

    This story is from the next print edition of MIT Technology Review, which explores power—who has it, and who wants it. It’s set to go live on Wednesday June 25, so subscribe & save 25% to read it and get a copy of the issue when it lands!

    Here’s what food and drug regulation might look like under the Trump administration

    Earlier this week, two new leaders of the US Food and Drug Administration published a list of priorities for the agency. Both Marty Makary and Vinay Prasad are controversial figures in the science community. They were generally highly respected academics until the covid pandemic, when their contrarian opinions on masking, vaccines, and lockdowns turned many of their colleagues off them.

    Given all this, along with recent mass firings of FDA employees, lots of people were pretty anxious to see what this list might include—and what we might expect the future of food and drug regulation in the US to look like. So let’s dive into the pair’s plans for new investigations, speedy approvals, and the “unleashing” of AI.

    —Jessica Hamzelou

    This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

    The must-reads

    I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 NASA is investigating leaks on the ISS
It’s postponed launching private astronauts to the station while it evaluates. (WP $)
+ Its core component has been springing small air leaks for months. (Reuters)
+ Meanwhile, this Chinese probe is en route to a near-Earth asteroid. (Wired $)

2 Undocumented migrants are using social media to warn of ICE raids
The DIY networks are anonymously reporting police presences across LA. (Wired $)
+ Platforms’ relationships with protest activism have changed drastically. (NY Mag $)

3 Google’s AI Overviews is hallucinating about the fatal Air India crash
It incorrectly stated that it involved an Airbus plane, not a Boeing 787. (Ars Technica)
+ Why Google’s AI Overviews gets things wrong. (MIT Technology Review)

4 Chinese engineers are sneaking suitcases of hard drives into the country
To covertly train advanced AI models. (WSJ $)
+ The US is cracking down on Huawei’s ability to produce chips. (Bloomberg $)
+ What the US-China AI race overlooks. (Rest of World)

5 The National Hurricane Center is joining forces with DeepMind
It’s the first time the center has used AI to predict nature’s worst storms. (NYT $)
+ Here’s what we know about hurricanes and climate change. (MIT Technology Review)

6 OpenAI is working on a product with toymaker Mattel
AI-powered Barbies?! (FT $)
+ Nothing is safe from the creep of AI, not even playtime. (LA Times $)
+ OpenAI has ambitions to reach billions of users. (Bloomberg $)

7 Chatbots posing as licensed therapists may be breaking the law
Digital rights organizations have filed a complaint to the FTC. (404 Media)
+ How do you teach an AI model to give therapy? (MIT Technology Review)

8 Major companies are abandoning their climate commitments
But some experts argue this may not be entirely bad. (Bloomberg $)
+ Google, Amazon and the problem with Big Tech’s climate claims. (MIT Technology Review)

9 Vibe coding is shaking up software engineering
Even though AI-generated code is inherently unreliable. (Wired $)
+ What is vibe coding, exactly? (MIT Technology Review)

10 TikTok really loves hotdogs
And who can blame it? (Insider $)

Quote of the day

    “It kind of jams two years of work into two months.”

    —Andrew Butcher, president of the Maine Connectivity Authority, tells Ars Technica why it’s so difficult to meet the Trump administration’s new plans to increase broadband access in certain states.

    One more thing

The surprising barrier that keeps us from building the housing we need

It’s a tough time to try and buy a home in America. From the beginning of the pandemic to early 2024, US home prices rose by 47%. In large swaths of the country, buying a home is no longer a possibility even for those with middle-class incomes. For many, that marks the end of an American dream built around owning a house. Over the same time, rents have gone up 26%.

The reason for the current rise in the cost of housing is clear to most economists: a lack of supply. Simply put, we don’t build enough houses and apartments, and we haven’t for years.

    But the reality is that even if we ease the endless permitting delays and begin cutting red tape, we will still be faced with a distressing fact: The construction industry is not very efficient when it comes to building stuff. Read the full story.

    —David Rotman

    We can still have nice things

A place for comfort, fun and distraction to brighten up your day.
+ If you’re one of the unlucky people who has triskaidekaphobia, look away now.
+ 15-year-old Nicholas is preparing to head from his home in the UK to Japan to become a professional sumo wrestler.
+ Earlier this week, London played host to 20,000 women in bald caps. But why?
+ Why do dads watch TV standing up? I need to know.
  • Meta Invests $14.3 Billion in Scale AI to Kick-Start Superintelligence Lab

    Meta is making its first major minority investment in an outside company as it tries to catch up to a growing field of artificial intelligence rivals.
  • Meta officially ‘acqui-hires’ Scale AI — will it draw regulator scrutiny?

    Meta is looking to up its weakening AI game with a key talent grab.

    Following days of speculation, the social media giant has confirmed that Scale AI’s founder and CEO, Alexandr Wang, is joining Meta to work on its AI efforts.

    Meta will invest $14.3 billion in Scale AI as part of the deal, and will have a 49% stake in the AI startup, which specializes in data labeling and model evaluation services. Other key Scale employees will also move over to Meta, while CSO Jason Droege will step in as Scale’s interim CEO.

    This move comes as the Mark Zuckerberg-led company goes all-in on building a new research lab focused on “superintelligence,” the next step beyond artificial general intelligence (AGI).

    The arrangement also reflects a growing trend in big tech, where industry giants are buying companies without really buying them — what’s increasingly being referred to as “acqui-hiring.” It involves recruiting key personnel from a company, licensing its technology, and selling its products, but leaving it as a private entity.

    “This is fundamentally a massive ‘acqui-hire’ play disguised as a strategic investment,” said Wyatt Mayham, lead AI consultant at Northwest AI Consulting. “While Meta gets Scale’s data infrastructure, the real prize is Wang joining Meta to lead their superintelligence lab. At the $14.3 billion price tag, this might be the most expensive individual talent acquisition in tech history.”

    Closing gaps with competitors

    Meta has struggled to keep up with OpenAI, Anthropic, and other key competitors in the AI race, recently even delaying the launch of its new flagship model, Behemoth, purportedly due to internal concerns about its performance. It has also seen the departure of several of its top researchers.

     “It’s not really a secret at this point that Meta’s Llama 4 models have had significant performance issues,” Mayham said. “Zuck is essentially betting that Wang’s track record building AI infrastructure can solve Meta’s alignment and model quality problems faster than internal development.” And, he added, Scale’s enterprise-grade human feedback loops are exactly what Meta’s Llama models need to compete with ChatGPT and Claude on reliability and task-following.
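
    To make the “human feedback loop” idea concrete, here is a minimal, hypothetical sketch in Python of the kind of pairwise preference record a data-labeling vendor might collect, and how a batch of such records could be reduced to a crude preference signal. The field names, example values, and the preference_margin helper are illustrative assumptions for this post, not Scale AI’s or Meta’s actual schema or pipeline.

```python
# Illustrative sketch only: a hypothetical pairwise human-feedback record of the
# kind data-labeling vendors collect for preference tuning. Field names are
# assumptions for this example, not Scale AI's actual schema.
from collections import Counter

feedback_records = [
    {
        "prompt": "Summarize the attached contract in one paragraph.",
        "response_a": "The agreement grants a two-year software license ...",
        "response_b": "Contract summary: license, two years, renewal terms ...",
        "preferred": "a",   # which response the human rater judged better
        "rater_id": "r-102",
    },
    {
        "prompt": "Summarize the attached contract in one paragraph.",
        "response_a": "The agreement grants a two-year software license ...",
        "response_b": "Contract summary: license, two years, renewal terms ...",
        "preferred": "a",
        "rater_id": "r-119",
    },
]

def preference_margin(records):
    """Count rater votes for each response; the margin is a crude preference signal."""
    votes = Counter(r["preferred"] for r in records)
    return votes["a"] - votes["b"]

if __name__ == "__main__":
    print("preference margin for response A:", preference_margin(feedback_records))
```

    In practice, many thousands of comparisons like these would be used to train a reward model that steers a base model toward more reliable, instruction-following behavior, which is the kind of asset Mayham is pointing to.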

    Data quality, a key focus for Wang, is a big factor in solving those performance problems. He wrote in a note to Scale employees on Thursday, later posted on X, that when he founded Scale AI in 2016 amidst some of the early AI breakthroughs, “it was clear even then that data was the lifeblood of AI systems, and that was the inspiration behind starting Scale.”

    But despite Meta’s huge investment, Scale AI is underscoring its commitment to sovereignty: “Scale remains an independent leader in AI, committed to providing industry-leading AI solutions and safeguarding customer data,” the company wrote in a blog post. “Scale will continue to partner with leading AI labs, multinational enterprises, and governments to deliver expert data and technology solutions through every phase of AI’s evolution.”

    Allowing big tech to side-step notification

    But while it’s only just been inked, the high-profile deal is already raising some eyebrows. According to experts, arrangements like these allow tech companies to acquire top talent and key technologies while side-stepping regulatory notification requirements.

    The US Federal Trade Commission (FTC) requires mergers and acquisitions totaling more than $126 million to be reported in advance. Licensing deals or the mass hiring-away of a company’s employees don’t have this requirement. This allows companies to move more quickly, as they don’t have to undergo the lengthy federal review process.

    Microsoft’s deal with Inflection AI is probably one of the highest-profile examples of the “acqui-hiring” trend. In March 2024, the tech giant paid the startup $650 million in licensing fees and hired much of its team, including co-founders Mustafa Suleyman (now CEO of Microsoft AI) and Karén Simonyan (chief scientist of Microsoft AI).

    Similarly, last year Amazon hired more than 50% of Adept AI’s key personnel, including its CEO, to focus on AGI. Google also inked a licensing agreement with Character AI and hired a majority of its founders and researchers.

    However, regulators have caught on, with the FTC launching inquiries into both the Microsoft-Inflection and Amazon-Adept deals, and the US Justice Department (DOJ) analyzing Google-Character AI.

    Reflecting ‘desperation’ in the AI industry

    Meta’s decision to go forward with this arrangement anyway, despite that dicey backdrop, seems to indicate how anxious the company is to keep up in the AI race.

    “The most interesting piece of this all is the timing,” said Mayham. “It reflects broader industry desperation. Tech giants are increasingly buying parts of promising AI startups to secure key talent without acquiring full companies, following similar patterns with Microsoft-Inflection and Google-Character AI.”

    However, the regulatory risks are “real but nuanced,” he noted. Meta’s acquisition could face scrutiny from antitrust regulators, particularly as the company is involved in an ongoing FTC lawsuit over its Instagram and WhatsApp acquisitions. While the 49% ownership position appears designed to avoid triggering automatic thresholds, US regulatory bodies like the FTC and DOJ can review minority stake acquisitions under the Clayton Antitrust Act if they seem to threaten competition.

    Perhaps more importantly, Meta is not considered a leader in AGI development and is trailing OpenAI, Anthropic, and Google, meaning regulators may not consider the deal all that concerning (yet).

    All told, the arrangement certainly signals Meta’s recognition that the AI race has shifted from a compute and model size competition to a data quality and alignment battle, Mayham noted.

    “I think the [gist] of this is that Zuck’s biggest bet is that talent and data infrastructure matter more than raw compute power in the AI race,” he said. “The regulatory risk is manageable given Meta’s trailing position, but the acqui-hire premium shows how expensive top AI talent has become.”
    WWW.COMPUTERWORLD.COM