• Sam Altman biographer Keach Hagey explains why the OpenAI CEO was ‘born for this moment’

    In “The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future,” Wall Street Journal reporter Keach Hagey examines our AI-obsessed moment through one of its key figures — Sam Altman, co-founder and CEO of OpenAI.
    Hagey begins with Altman’s Midwest childhood, then takes readers through his career at startup Loopt, accelerator Y Combinator, and now at OpenAI. She also sheds new light on the dramatic few days when Altman was fired, then quickly reinstated, as OpenAI’s CEO.
    Looking back at what OpenAI employees now call “the Blip,” Hagey said the failed attempt to oust Altman revealed that OpenAI’s complex structure — with a for-profit company controlled by a nonprofit board — is “not stable.” And with OpenAI largely backing down from plans to let the for-profit side take control, Hagey predicted that this “fundamentally unstable arrangement” will “continue to give investors pause.”
    Does that mean OpenAI could struggle to raise the funds it needs to keep going? Hagey replied that it could “absolutely” be an issue.
    “My research into Sam suggests that he might well be up to that challenge,” she said. “But success is not guaranteed.”
    In addition, Hagey’s biography examines Altman’s politics, which she described as “pretty traditionally progressive” — making it a bit surprising that he’s struck massive infrastructure deals with the backing of the Trump administration.
    “But this is one area where, in some ways, I feel like Sam Altman has been born for this moment, because he is a deal maker and Trump is a deal maker,” Hagey said. “Trump respects nothing so much as a big deal with a big price tag on it, and that is what Sam Altman is really great at.”


    In an interview with TechCrunch, Hagey also discussed Altman’s response to the book, his trustworthiness, and the AI “hype universe.”
    This interview has been edited for length and clarity. 
    You open the book by acknowledging some of the reservations that Sam Altman had about the project —  this idea that we tend to focus too much on individuals rather than organizations or broad movements, and also that it’s way too early to assess the impact of OpenAI. Did you share those concerns?
    Well, I don’t really share them, because this was a biography. This project was to look at a person, not an organization. And I also think that Sam Altman has set himself up in a way where it does matter what kind of moral choices he has made and what his moral formation has been, because the broad project of AI is really a moral project. That is the basis of OpenAI’s existence. So I think these are fair questions to ask about a person, not just an organization.
    As far as whether it’s too soon, I mean, sure, it’s definitely [early to] assess the entire impact of AI. But it’s been an extraordinary story for OpenAI — just so far, it’s already changed the stock market, it has changed the entire narrative of business. I’m a business journalist. We do nothing but talk about AI, all day long, every day. So in that way, I don’t think it’s too early.
    And despite those reservations, Altman did cooperate with you. Can you say more about what your relationship with him was like during the process of researching the book?
    Well, he was definitely not happy when he was informed about the book’s existence. And there was a long period of negotiation, frankly. In the beginning, I figured I was going to write this book without his help — what we call, in the business, a write-around profile. I’ve done plenty of those over my career, and I figured this would just be one more.
    Over time, as I made more and more calls, he opened up a little bit. And [eventually,] he was generous to sit down with me several times for long interviews and share his thoughts with me.
    Has he responded to the finished book at all?
    No. He did tweet about the project, about his decision to participate with it, but he was very clear that he was never going to read it. It’s the same way that I don’t like to watch my TV appearances or podcasts that I’m on.
    In the book, he’s described as this emblematic Silicon Valley figure. What do you think are the key characteristics that make him representative of the Valley and the tech industry?
    In the beginning, I think it was that he was young. The Valley really glorifies youth, and he was 19 years old when he started his first startup. You see him going into these meetings with people twice his age, doing deals with telecom operators for his first startup, and no one could get over that this kid was so smart.
    The other is that he is a once-in-a-generation fundraising talent, and that’s really about being a storyteller. I don’t think it’s an accident that you have essentially a salesman and a fundraiser at the top of the most important AI company today.
    That ties into one of the questions that runs through the book — this question about Altman’s trustworthiness. Can you say more about the concerns people seem to have about that? To what extent is he a trustworthy figure? 
    Well, he’s a salesman, so he’s really excellent at getting in a room and convincing people that he can see the future and that he has something in common with them. He gets people to share his vision, which is a rare talent.
    There are people who’ve watched that happen a bunch of times, who think, “Okay, what he says does not always map to reality,” and have, over time, lost trust in him. This happened both at his first startup and very famously at OpenAI, as well as at Y Combinator. So it is a pattern, but I think it’s a typical critique of people who have the salesman skill set.
    So it’s not necessarily that he’s particularly untrustworthy, but it’s part-and-parcel of being a salesman leading these important companies.
    I mean, there also are management issues that are detailed in the book, where he is not great at dealing with conflict, so he’ll basically tell people what they want to hear. That causes a lot of Sturm und Drang in the management ranks, and it’s a pattern. Something like that happened at Loopt, where the executives asked the board to replace him as CEO. And you saw it happen at OpenAI as well.
    You’ve touched on Altman’s firing, which was also covered in a book excerpt that was published in the Wall Street Journal. One of the striking things to me, looking back at it, was just how complicated everything was — all the different factions within the company, all the people who seemed pro-Altman one day and then anti-Altman the next. When you pull back from the details, what do you think is the bigger significance of that incident?
    The very big picture is that the nonprofit governance structure is not stable. You can’t really take investment from the likes of Microsoft and a bunch of other investors and then give them absolutely no say whatsoever in the governance of the company.
    That’s what they have tried to do, but I think what we saw in that firing is how power actually works in the world. When you have stakeholders, even if there’s a piece of paper that says they have no rights, they still have power. And when it became clear that everyone in the company was going to go to Microsoft if they didn’t reinstate Sam Altman, they reinstated Sam Altman.
    In the book, you take the story up to maybe the end of 2024. There have been all these developments since then, which you’ve continued to report on, including this announcement that actually, they’re not fully converting to a for-profit. How do you think that’s going to affect OpenAI going forward? 
    It’s going to make it harder for them to raise money, because they basically had to do an about-face. I know that the new structure going forward of the public benefit corporation is not exactly the same as the current structure of the for-profit — it is a little bit more investor friendly, it does clarify some of those things.
    But overall, what you have is a nonprofit board that controls a for-profit company, and that fundamentally unstable arrangement is what led to the so-called Blip. And I think it would continue to give investors pause, going forward, if they are going to have so little control over their investment.
    Obviously, OpenAI is still such a capital intensive business. If they have challenges raising more money, is that an existential question for the company?
    It absolutely could be. My research into Sam suggests that he might well be up to that challenge. But success is not guaranteed.
    Like you said, there’s a dual perspective in the book that’s partly about who Sam is, and partly about what that says about where AI is going from here. How did that research into his particular story shape the way you now look at these broader debates about AI and society?
    I went down a rabbit hole in the beginning of the book, into Sam’s father, Jerry Altman, in part because I thought it was striking how he’d been written out of basically every other thing that had ever been written about Sam Altman. What I found in this research was a very idealistic man who was, from youth, very interested in these public-private partnerships and the power of the government to set policy. He ended up having an impact on the way that affordable housing is still financed to this day.
    And when I traced Sam’s development, I saw that he has long believed that the government should really be the one that is funding and guiding AI research. In the early days of OpenAI, they went and tried to get the government to invest, as he’s publicly said, and it didn’t work out. But he looks back to these great mid-20th century labs like Xerox PARC and Bell Labs, which are private, but there was a ton of government money running through and supporting that ecosystem. And he says, “That’s the right way to do it.”
    Now I am watching daily as it seems like the United States is summoning the forces of state capitalism to get behind Sam Altman’s project to build these data centers, both in the United States and now there was just one last week announced in Abu Dhabi. This is a vision he has had for a very, very long time.
    My sense of the vision, as he presented it earlier, was one where, on the one hand, the government is funding these things and building this infrastructure, and on the other hand, the government is also regulating and guiding AI development for safety purposes. And it now seems like the path being pursued is one where they’re backing away from the safety side and doubling down on the government investment side.
    Absolutely. Isn’t it fascinating? 
    You talk about Sam as a political figure, as someone who’s had political ambitions at different times, but also somebody who has what are in many ways traditionally liberal political views while being friends with folks like — at least early on — Elon Musk and Peter Thiel. And he’s done a very good job of navigating the Trump administration. What do you think his politics are right now?
    I’m not sure his actual politics have changed; they are pretty traditionally progressive politics. Not completely — he’s been critical about things like cancel culture, but in general, he thinks the government is there to take tax revenue and solve problems.
    His success in the Trump administration has been fascinating because he has been able to find their one area of overlap, which is the desire to build a lot of data centers, and just double down on that and not talk about any other stuff. But this is one area where, in some ways, I feel like Sam Altman has been born for this moment, because he is a deal maker and Trump is a deal maker. Trump respects nothing so much as a big deal with a big price tag on it, and that is what Sam Altman is really great at.
    You open and close the book not just with Sam’s father, but with his family as a whole. What else is worth highlighting in terms of how his upbringing and family shapes who he is now?
    Well, you see both the idealism from his father and also the incredible ambition from his mother, who was a doctor, and had four kids and worked as a dermatologist. I think both of these things work together to shape him. They also had a more troubled marriage than I realized going into the book. So I do think that there’s some anxiety there that Sam himself is very upfront about, that he was a pretty anxious person for much of his life, until he did some meditation and had some experiences.
    And there’s his current family — he just had a baby and got married not too long ago. As a young gay man, growing up in the Midwest, he had to overcome some challenges, and I think those challenges both forged him in high school as a brave person who could stand up and take on a room as a public speaker, but also shaped his optimistic view of the world. Because, on that issue, I paint the scene of his wedding: That’s an unimaginable thing from the early ’90s, or from the ’80s when he was born. He’s watched society develop and progress in very tangible ways, and I do think that that has helped solidify his faith in progress.
    Something that I’ve found writing about AI is that the different visions being presented by people in the field can be so diametrically opposed. You have these wildly utopian visions, but also these warnings that AI could end the world. It gets so hyperbolic that it feels like people are not living in the same reality. Was that a challenge for you in writing the book?
    Well, I see those two visions — which feel very far apart — actually being part of the same vision, which is that AI is super important, and it’s going to completely transform everything. No one ever talks about the true opposite of that, which is, “Maybe this is going to be a cool enterprise tool, another way to waste time on the internet, and not quite change everything as much as everyone thinks.” So I see the doomers and the boomers feeding off each other and being part of the same sort of hype universe.
    As a journalist and as a biographer, you don’t necessarily come down on one side or the other — but actually, can you say where you come down on that?
    Well, I will say that I find myself using it a lot more recently, because it’s gotten a lot better. In the early stages, when I was researching the book, I was definitely a lot more skeptical of its transformative economic power. I’m less skeptical now, because I just use it a lot more.
    #sam #altman #biographer #keach #hagey
    Sam Altman biographer Keach Hagey explains why the OpenAI CEO was ‘born for this moment’
    In “The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future,” Wall Street Journal reporter Keach Hagey examines our AI-obsessed moment through one of its key figures — Sam Altman, co-founder and CEO of OpenAI. Hagey begins with Altman’s Midwest childhood, then takes readers through his career at startup Loopt, accelerator Y Combinator, and now at OpenAI. She also sheds new light on the dramatic few days when Altman was fired, then quickly reinstated, as OpenAI’s CEO. Looking back at what OpenAI employees now call “the Blip,” Hagey said the failed attempt to oust Altman revealed that OpenAI’s complex structure — with a for-profit company controlled by a nonprofit board — is “not stable.” And with OpenAI largely backing down from plans to let the for-profit side take control, Hagey predicted that this “fundamentally unstable arrangement” will “continue to give investors pause.” Does that mean OpenAI could struggle to raise the funds it needs to keep going? Hagey replied that it could “absolutely” be an issue. “My research into Sam suggests that he might well be up to that challenge,” she said. “But success is not guaranteed.” In addition, Hagey’s biographyexamines Altman’s politics, which she described as “pretty traditionally progressive” — making it a bit surprising that he’s struck massive infrastructure deals with the backing of the Trump administration. “But this is one area where, in some ways, I feel like Sam Altman has been born for this moment, because he is a deal maker and Trump is a deal maker,” Hagey said. “Trump respects nothing so much as a big deal with a big price tag on it, and that is what Sam Altman is really great at.” Techcrunch event now through June 4 for TechCrunch Sessions: AI on your ticket to TC Sessions: AI—and get 50% off a second. Hear from leaders at OpenAI, Anthropic, Khosla Ventures, and more during a full day of expert insights, hands-on workshops, and high-impact networking. 
These low-rate deals disappear when the doors open on June 5. Exhibit at TechCrunch Sessions: AI Secure your spot at TC Sessions: AI and show 1,200+ decision-makers what you’ve built — without the big spend. Available through May 9 or while tables last. Berkeley, CA | June 5 REGISTER NOW In an interview with TechCrunch, Hagey also discussed Altman’s response to the book, his trustworthiness, and the AI “hype universe.” This interview has been edited for length and clarity.  You open the book by acknowledging some of the reservations that Sam Altman had about the project —  this idea that we tend to focus too much on individuals rather than organizations or broad movements, and also that it’s way too early to assess the impact of OpenAI. Did you share those concerns? Well, I don’t really share them, because this was a biography. This project was to look at a person, not an organization. And I also think that Sam Altman has set himself up in a way where it does matter what kind of moral choices he has made and what his moral formation has been, because the broad project of AI is really a moral project. That is the basis of OpenAI’s existence. So I think these are fair questions to ask about a person, not just an organization. As far as whether it’s too soon, I mean, sure, it’s definitelyassess the entire impact of AI. But it’s been an extraordinary story for OpenAI — just so far, it’s already changed the stock market, it has changed the entire narrative of business. I’m a business journalist. We do nothing but talk about AI, all day long, every day. So in that way, I don’t think it’s too early. And despite those reservations, Altman did cooperate with you. Can you say more about what your relationship with him was like during the process of researching the book? Well, he was definitely not happy when he was informed about the book’s existence. And there was a long period of negotiation, frankly. 
In the beginning, I figured I was going to write this book without his help — what we call, in the business, a write-around profile. I’ve done plenty of those over my career, and I figured this would just be one more. Over time, as I made more and more calls, he opened up a little bit. Andhe was generous to sit down with me several times for long interviews and share his thoughts with me. Has he responded to the finished book at all? No. He did tweet about the project, about his decision to participate with it, but he was very clear that he was never going to read it. It’s the same way that I don’t like to watch my TV appearances or podcasts that I’m on. In the book, he’s described as this emblematic Silicon Valley figure. What do you think are the key characteristics that make him representative of the Valley and the tech industry? In the beginning, I think it was that he was young. The Valley really glorifies youth, and he was 19 years old when he started his first startup. You see him going into these meetings with people twice his age, doing deals with telecom operators for his first startup, and no one could get over that this kid was so smart. The other is that he is a once-in-a-generation fundraising talent, and that’s really about being a storyteller. I don’t think it’s an accident that you have essentially a salesman and a fundraiser at the top of the most important AI company today, That ties into one of the questions that runs through the book — this question about Altman’s trustworthiness. Can you say more about the concerns people seem to have about that? To what extent is he a trustworthy figure?  Well, he’s a salesman, so he’s really excellent at getting in a room and convincing people that he can see the future and that he has something in common with them. He gets people to share his vision, which is a rare talent. 
There are people who’ve watched that happen a bunch of times, who think, “Okay, what he says does not always map to reality,” and have, over time, lost trust in him. This happened both at his first startup and very famously at OpenAI, as well as at Y Combinator. So it is a pattern, but I think it’s a typical critique of people who have the salesman skill set. So it’s not necessarily that he’s particularly untrustworthy, but it’s part-and-parcel of being a salesman leading these important companies. I mean, there also are management issues that are detailed in the book, where he is not great at dealing with conflict, so he’ll basically tell people what they want to hear. That causes a lot of sturm-und-drang in the management ranks, and it’s a pattern. Something like that happened at Loopt, where the executives asked the board to replace him as CEO. And you saw it happen at OpenAI as well. You’ve touched on Altman’s firing, which was also covered in a book excerpt that was published in the Wall Street Journal. One of the striking things to me, looking back at it, was just how complicated everything was — all the different factions within the company, all the people who seemed pro-Altman one day and then anti-Altman the next. When you pull back from the details, what do you think is the bigger significance of that incident? The very big picture is that the nonprofit governance structure is not stable. You can’t really take investment from the likes of Microsoft and a bunch of other investors and then give them absolutely no say whatsoever in the governance of the company. That’s what they have tried to do, but I think what we saw in that firing is how power actually works in the world. When you have stakeholders, even if there’s a piece of paper that says they have no rights, they still have power. And when it became clear that everyone in the company was going to go to Microsoft if they didn’t reinstate Sam Altman, they reinstated Sam Altman. 
In the book, you take the story up to maybe the end of 2024. There have been all these developments since then, which you’ve continued to report on, including this announcement that actually, they’re not fully converting to a for-profit. How do you think that’s going to affect OpenAI going forward?  It’s going to make it harder for them to raise money, because they basically had to do an about-face. I know that the new structure going forward of the public benefit corporation is not exactly the same as the current structure of the for-profit — it is a little bit more investor friendly, it does clarify some of those things. But overall, what you have is a nonprofit board that controls a for-profit company, and that fundamentally unstable arrangement is what led to the so-called Blip. And I think you would continue to give investors pause, going forward, if they are going to have so little control over their investment. Obviously, OpenAI is still such a capital intensive business. If they have challenges raising more money, is that an existential question for the company? It absolutely could be. My research into Sam suggests that he might well be up to that challenge. But success is not guaranteed. Like you said, there’s a dual perspective in the book that’s partly about who Sam is, and partly about what that says about where AI is going from here. How did that research into his particular story shape the way you now look at these broader debates about AI and society? I went down a rabbit hole in the beginning of the book,into Sam’s father, Jerry Altman, in part because I thought it was striking how he’d been written out of basically every other thing that had ever been written about Sam Altman. What I found in this research was a very idealistic man who was, from youth, very interested in these public-private partnerships and the power of the government to set policy. He ended up having an impact on the way that affordable housing is still financed to this day. 
And when I traced Sam’s development, I saw that he has long believed that the government should really be the one that is funding and guiding AI research. In the early days of OpenAI, they went and tried to get the government to invest, as he’s publicly said, and it didn’t work out. But he looks back to these great mid-20th century labs like Xerox PARC and Bell Labs, which are private, but there was a ton of government money running through and supporting that ecosystem. And he says, “That’s the right way to do it.” Now I am watching daily as it seems like the United States is summoning the forces of state capitalism to get behind Sam Altman’s project to build these data centers, both in the United States and now there was just one last week announced in Abu Dhabi. This is a vision he has had for a very, very long time. My sense of the vision, as he presented it earlier, was one where, on the one hand, the government is funding these things and building this infrastructure, and on the other hand, the government is also regulating and guiding AI development for safety purposes. And it now seems like the path being pursued is one where they’re backing away from the safety side and doubling down on the government investment side. Absolutely. Isn’t it fascinating?  You talk about Sam as a political figure, as someone who’s had political ambitions at different times, but also somebody who has what are in many ways traditionally liberal political views while being friends with folks like — at least early on — Elon Musk and Peter Thiel. And he’s done a very good job of navigating the Trump administration. What do you think his politics are right now? I’m not sure his actual politics have changed, they are pretty traditionally progressive politics. Not completely — he’s been critical about things like cancel culture, but in general, he thinks the government is there to take tax revenue and solve problems. 
His success in the Trump administration has been fascinating because he has been able to find their one area of overlap, which is the desire to build a lot of data centers, and just double down on that and not talk about any other stuff. But this is one area where, in some ways, I feel like Sam Altman has been born for this moment, because he is a deal maker and Trump is a deal maker. Trump respects nothing so much as a big deal with a big price tag on it, and that is what Sam Altman is really great at. You open and close the book not just with Sam’s father, but with his family as a whole. What else is worth highlighting in terms of how his upbringing and family shapes who he is now? Well, you see both the idealism from his father and also the incredible ambition from his mother, who was a doctor, and had four kids and worked as a dermatologist. I think both of these things work together to shape him. They also had a more troubled marriage than I realized going into the book. So I do think that there’s some anxiety there that Sam himself is very upfront about, that he was a pretty anxious person for much of his life, until he did some meditation and had some experiences. And there’s his current family — he just had a baby and got married not too long ago. As a young gay man, growing up in the Midwest, he had to overcome some challenges, and I think those challenges both forged him in high school as a brave person who could stand up and take on a room as a public speaker, but also shaped his optimistic view of the world. Because, on that issue, I paint the scene of his wedding: That’s an unimaginable thing from the early ‘90s, or from the ‘80s when he was born. He’s watched society develop and progress in very tangible ways, and I do think that that has helped solidify his faith in progress. Something that I’ve found writing about AI is that the different visions being presented by people in the field can be so diametrically opposed. 
You have these wildly utopian visions, but also these warnings that AI could end the world. It gets so hyperbolic that it feels like people are not living in the same reality. Was that a challenge for you in writing the book? Well, I see those two visions — which feel very far apart — actually being part of the same vision, which is that AI is super important, and it’s going to completely transform everything. No one ever talks about the true opposite of that, which is, “Maybe this is going to be a cool enterprise tool, another way to waste time on the internet, and not quite change everything as much as everyone thinks.” So I see the doomers and the boomers feeding off each other and being part of the same sort of hype universe. As a journalist and as a biographer, you don’t necessarily come down on one side or the other — but actually, can you say where you come down on that? Well, I will say that I find myself using it a lot more recently, because it’s gotten a lot better. In the early stages, when I was researching the book, I was definitely a lot more skeptical of its transformative economic power. I’m less skeptical now, because I just use it a lot more. #sam #altman #biographer #keach #hagey
    TECHCRUNCH.COM
    Sam Altman biographer Keach Hagey explains why the OpenAI CEO was ‘born for this moment’
    In “The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future,” Wall Street Journal reporter Keach Hagey examines our AI-obsessed moment through one of its key figures — Sam Altman, co-founder and CEO of OpenAI. Hagey begins with Altman’s Midwest childhood, then takes readers through his career at startup Loopt, accelerator Y Combinator, and now at OpenAI. She also sheds new light on the dramatic few days when Altman was fired, then quickly reinstated, as OpenAI’s CEO. Looking back at what OpenAI employees now call “the Blip,” Hagey said the failed attempt to oust Altman revealed that OpenAI’s complex structure — with a for-profit company controlled by a nonprofit board — is “not stable.” And with OpenAI largely backing down from plans to let the for-profit side take control, Hagey predicted that this “fundamentally unstable arrangement” will “continue to give investors pause.” Does that mean OpenAI could struggle to raise the funds it needs to keep going? Hagey replied that it could “absolutely” be an issue. “My research into Sam suggests that he might well be up to that challenge,” she said. “But success is not guaranteed.” In addition, Hagey’s biography (also available as an audiobook on Spotify) examines Altman’s politics, which she described as “pretty traditionally progressive” — making it a bit surprising that he’s struck massive infrastructure deals with the backing of the Trump administration. “But this is one area where, in some ways, I feel like Sam Altman has been born for this moment, because he is a deal maker and Trump is a deal maker,” Hagey said. “Trump respects nothing so much as a big deal with a big price tag on it, and that is what Sam Altman is really great at.” Techcrunch event Save now through June 4 for TechCrunch Sessions: AI Save $300 on your ticket to TC Sessions: AI—and get 50% off a second. 
In an interview with TechCrunch, Hagey also discussed Altman’s response to the book, his trustworthiness, and the AI “hype universe.” This interview has been edited for length and clarity.

You open the book by acknowledging some of the reservations that Sam Altman had about the project — this idea that we tend to focus too much on individuals rather than organizations or broad movements, and also that it’s way too early to assess the impact of OpenAI. Did you share those concerns?

Well, I don’t really share them, because this was a biography. This project was to look at a person, not an organization. And I also think that Sam Altman has set himself up in a way where it does matter what kind of moral choices he has made and what his moral formation has been, because the broad project of AI is really a moral project. That is the basis of OpenAI’s existence. So I think these are fair questions to ask about a person, not just an organization.

As far as whether it’s too soon, I mean, sure, it’s definitely [early to] assess the entire impact of AI. But it’s been an extraordinary story for OpenAI — just so far, it’s already changed the stock market, it has changed the entire narrative of business. I’m a business journalist. We do nothing but talk about AI, all day long, every day. So in that way, I don’t think it’s too early.

And despite those reservations, Altman did cooperate with you. Can you say more about what your relationship with him was like during the process of researching the book?
Well, he was definitely not happy when he was informed about the book’s existence. And there was a long period of negotiation, frankly. In the beginning, I figured I was going to write this book without his help — what we call, in the business, a write-around profile. I’ve done plenty of those over my career, and I figured this would just be one more. Over time, as I made more and more calls, he opened up a little bit. And [eventually,] he was generous to sit down with me several times for long interviews and share his thoughts with me.

Has he responded to the finished book at all?

No. He did tweet about the project, about his decision to participate with it, but he was very clear that he was never going to read it. It’s the same way that I don’t like to watch my TV appearances or podcasts that I’m on.

In the book, he’s described as this emblematic Silicon Valley figure. What do you think are the key characteristics that make him representative of the Valley and the tech industry?

In the beginning, I think it was that he was young. The Valley really glorifies youth, and he was 19 years old when he started his first startup. You see him going into these meetings with people twice his age, doing deals with telecom operators for his first startup, and no one could get over that this kid was so smart. The other is that he is a once-in-a-generation fundraising talent, and that’s really about being a storyteller. I don’t think it’s an accident that you have essentially a salesman and a fundraiser at the top of the most important AI company today.

That ties into one of the questions that runs through the book — this question about Altman’s trustworthiness. Can you say more about the concerns people seem to have about that? To what extent is he a trustworthy figure?

Well, he’s a salesman, so he’s really excellent at getting in a room and convincing people that he can see the future and that he has something in common with them.
He gets people to share his vision, which is a rare talent. There are people who’ve watched that happen a bunch of times, who think, “Okay, what he says does not always map to reality,” and have, over time, lost trust in him. This happened both at his first startup and very famously at OpenAI, as well as at Y Combinator. So it is a pattern, but I think it’s a typical critique of people who have the salesman skill set.

So it’s not necessarily that he’s particularly untrustworthy, but it’s part-and-parcel of being a salesman leading these important companies.

I mean, there also are management issues that are detailed in the book, where he is not great at dealing with conflict, so he’ll basically tell people what they want to hear. That causes a lot of sturm-und-drang in the management ranks, and it’s a pattern. Something like that happened at Loopt, where the executives asked the board to replace him as CEO. And you saw it happen at OpenAI as well.

You’ve touched on Altman’s firing, which was also covered in a book excerpt that was published in the Wall Street Journal. One of the striking things to me, looking back at it, was just how complicated everything was — all the different factions within the company, all the people who seemed pro-Altman one day and then anti-Altman the next. When you pull back from the details, what do you think is the bigger significance of that incident?

The very big picture is that the nonprofit governance structure is not stable. You can’t really take investment from the likes of Microsoft and a bunch of other investors and then give them absolutely no say whatsoever in the governance of the company. That’s what they have tried to do, but I think what we saw in that firing is how power actually works in the world. When you have stakeholders, even if there’s a piece of paper that says they have no rights, they still have power.
And when it became clear that everyone in the company was going to go to Microsoft if they didn’t reinstate Sam Altman, they reinstated Sam Altman.

In the book, you take the story up to maybe the end of 2024. There have been all these developments since then, which you’ve continued to report on, including this announcement that actually, they’re not fully converting to a for-profit. How do you think that’s going to affect OpenAI going forward?

It’s going to make it harder for them to raise money, because they basically had to do an about-face. I know that the new structure going forward of the public benefit corporation is not exactly the same as the current structure of the for-profit — it is a little bit more investor friendly, it does clarify some of those things. But overall, what you have is a nonprofit board that controls a for-profit company, and that fundamentally unstable arrangement is what led to the so-called Blip. And I think it will continue to give investors pause, going forward, if they are going to have so little control over their investment.

Obviously, OpenAI is still such a capital intensive business. If they have challenges raising more money, is that an existential question for the company?

It absolutely could be. My research into Sam suggests that he might well be up to that challenge. But success is not guaranteed.

Like you said, there’s a dual perspective in the book that’s partly about who Sam is, and partly about what that says about where AI is going from here. How did that research into his particular story shape the way you now look at these broader debates about AI and society?

I went down a rabbit hole in the beginning of the book, [looking] into Sam’s father, Jerry Altman, in part because I thought it was striking how he’d been written out of basically every other thing that had ever been written about Sam Altman.
What I found in this research was a very idealistic man who was, from youth, very interested in these public-private partnerships and the power of the government to set policy. He ended up having an impact on the way that affordable housing is still financed to this day. And when I traced Sam’s development, I saw that he has long believed that the government should really be the one that is funding and guiding AI research. In the early days of OpenAI, they went and tried to get the government to invest, as he’s publicly said, and it didn’t work out.

But he looks back to these great mid-20th century labs like Xerox PARC and Bell Labs, which are private, but there was a ton of government money running through and supporting that ecosystem. And he says, “That’s the right way to do it.” Now I am watching daily as it seems like the United States is summoning the forces of state capitalism to get behind Sam Altman’s project to build these data centers, both in the United States and, as was just announced last week, in Abu Dhabi. This is a vision he has had for a very, very long time.

My sense of the vision, as he presented it earlier, was one where, on the one hand, the government is funding these things and building this infrastructure, and on the other hand, the government is also regulating and guiding AI development for safety purposes. And it now seems like the path being pursued is one where they’re backing away from the safety side and doubling down on the government investment side.

Absolutely. Isn’t it fascinating?

You talk about Sam as a political figure, as someone who’s had political ambitions at different times, but also somebody who has what are in many ways traditionally liberal political views while being friends with folks like — at least early on — Elon Musk and Peter Thiel. And he’s done a very good job of navigating the Trump administration. What do you think his politics are right now?
I’m not sure his actual politics have changed; they are pretty traditionally progressive politics. Not completely — he’s been critical about things like cancel culture, but in general, he thinks the government is there to take tax revenue and solve problems. His success in the Trump administration has been fascinating because he has been able to find their one area of overlap, which is the desire to build a lot of data centers, and just double down on that and not talk about any other stuff. But this is one area where, in some ways, I feel like Sam Altman has been born for this moment, because he is a deal maker and Trump is a deal maker. Trump respects nothing so much as a big deal with a big price tag on it, and that is what Sam Altman is really great at.

You open and close the book not just with Sam’s father, but with his family as a whole. What else is worth highlighting in terms of how his upbringing and family shaped who he is now?

Well, you see both the idealism from his father and also the incredible ambition from his mother, who was a doctor, had four kids, and worked as a dermatologist. I think both of these things work together to shape him. They also had a more troubled marriage than I realized going into the book. So I do think that there’s some anxiety there that Sam himself is very upfront about, that he was a pretty anxious person for much of his life, until he did some meditation and had some experiences. And there’s his current family — he just had a baby and got married not too long ago.

As a young gay man growing up in the Midwest, he had to overcome some challenges, and I think those challenges both forged him in high school as a brave person who could stand up and take on a room as a public speaker, but also shaped his optimistic view of the world. Because, on that issue, I paint the scene of his wedding: That’s an unimaginable thing from the early ‘90s, or from the ‘80s when he was born.
He’s watched society develop and progress in very tangible ways, and I do think that that has helped solidify his faith in progress.

Something that I’ve found writing about AI is that the different visions being presented by people in the field can be so diametrically opposed. You have these wildly utopian visions, but also these warnings that AI could end the world. It gets so hyperbolic that it feels like people are not living in the same reality. Was that a challenge for you in writing the book?

Well, I see those two visions — which feel very far apart — actually being part of the same vision, which is that AI is super important, and it’s going to completely transform everything. No one ever talks about the true opposite of that, which is, “Maybe this is going to be a cool enterprise tool, another way to waste time on the internet, and not quite change everything as much as everyone thinks.” So I see the doomers and the boomers feeding off each other and being part of the same sort of hype universe.

As a journalist and as a biographer, you don’t necessarily come down on one side or the other — but actually, can you say where you come down on that?

Well, I will say that I find myself using it a lot more recently, because it’s gotten a lot better. In the early stages, when I was researching the book, I was definitely a lot more skeptical of its transformative economic power. I’m less skeptical now, because I just use it a lot more.
  • How to Spot AI Hype and Avoid The AI Con, According to Two Experts
    "Artificial intelligence, if we're being frank, is a con: a bill of goods you are being sold to line someone's pockets." That is the heart of the argument that linguist Emily Bender and sociologist Alex Hanna make in their new book The AI Con.
    It's a useful guide for anyone whose life has intersected with technologies sold as artificial intelligence and anyone who's questioned their real usefulness, which is most of us.
    Bender is a professor at the University of Washington who was named one of Time magazine's most influential people in artificial intelligence, and Hanna is the director of research at the nonprofit Distributed AI Research Institute and a former member of the ethical AI team at Google. The explosion of ChatGPT in late 2022 kicked off a new hype cycle in AI.
    Hype, as the authors define it, is the "aggrandizement" of technology that you are convinced you need to buy or invest in "lest you miss out on entertainment or pleasure, monetary reward, return on investment, or market share." But it's not the first time, nor likely the last, that scholars, government leaders and regular people have been intrigued and worried by the idea of machine learning and AI. Bender and Hanna trace the roots of machine learning back to the 1950s, to when mathematician John McCarthy coined the term artificial intelligence.
    It was in an era when the United States was looking to fund projects that would help the country gain any kind of edge on the Soviets militarily, ideologically and technologically.
    "It didn't spring whole cloth out of Zeus's head or anything.
    This has a longer history," Hanna said in an interview with CNET.
    "It's certainly not the first hype cycle with, quote, unquote, AI." Today's hype cycle is propelled by the billions of dollars of venture capital investment into startups like OpenAI and the tech giants like Meta, Google and Microsoft pouring billions of dollars into AI research and development.
    The result is clear, with all the newest phones, laptops and software updates drenched in AI-washing.
    And there are no signs that AI research and development will slow down, thanks in part to a growing motivation to beat China in AI development.
    Not the first hype cycle indeed. Of course, generative AI in 2025 is much more advanced than the Eliza psychotherapy chatbot that first enraptured scientists in the 1970s.
    Today's business leaders and workers are inundated with hype, with a heavy dose of FOMO and seemingly complex but often misused jargon.
    Listening to tech leaders and AI enthusiasts, it might seem like AI will take your job to save your company money.
    But the authors argue that neither is wholly likely, which is one reason why it's important to recognize and break through the hype. So how do we recognize AI hype? These are a few telltale signs, according to Bender and Hanna, that we share below.
    The authors outline more questions to ask and strategies for AI hype busting in their book, which is out now in the US.

    Watch out for language that humanizes AI

    Anthropomorphizing, or the process of giving an inanimate object human-like characteristics or qualities, is a big part of building AI hype.
    An example of this kind of language can be found when AI companies say their chatbots can now "see" and "think." These can be useful comparisons when trying to describe the ability of new object-identifying AI programs or deep-reasoning AI models, but they can also be misleading.
    AI chatbots aren't capable of seeing or thinking because they don't have brains.
    Even the idea of neural nets, Hanna noted in our interview and in the book, is based on human understanding of neurons from the 1950s, not actually how neurons work, but it can fool us into believing there's a brain behind the machine. That belief is something we're predisposed to because of how we as humans process language.
    We're conditioned to imagine that there is a mind behind the text we see, even when we know it's generated by AI, Bender said.
    "We interpret language by developing a model in our minds of who the speaker was," Bender added. In these models, we use our knowledge of the person speaking to create meaning, not just using the meaning of the words they say.
    "So when we encounter synthetic text extruded from something like ChatGPT, we're going to do the same thing," Bender said.
    "And it is very hard to remind ourselves that the mind isn't there.
    It's just a construct that we have produced." The authors argue that part of why AI companies try to convince us their products are human-like is that this sets the foreground for them to convince us that AI can replace humans, whether it's at work or as creators.
    It's compelling for us to believe that AI could be the silver bullet fix to complicated problems in critical industries like health care and government services. But more often than not, the authors argue, AI isn't being used to fix anything.
    AI is sold with the goal of efficiency, but AI services end up replacing qualified workers with black box machines that need copious amounts of babysitting from underpaid contract or gig workers.
    As Hanna put it in our interview, "AI is not going to take your job, but it will make your job shittier."

    Be dubious of the phrase 'super intelligence'

    If a human can't do something, you should be wary of claims that an AI can do it.
    "Superhuman intelligence, or super intelligence, is a very dangerous turn of phrase, insofar as it thinks that some technology is going to make humans superfluous," Hanna said.
    In "certain domains, like pattern matching at scale, computers are quite good at that.
    But if there's an idea that there's going to be a superhuman poem, or a superhuman notion of research or doing science, that is clear hype." Bender added, "And we don't talk about airplanes as superhuman flyers or rulers as superhuman measurers, it seems to be only in this AI space that that comes up." The idea of AI "super intelligence" comes up often when people talk about artificial general intelligence.
    Many CEOs struggle to define what exactly AGI is, but it's essentially AI's most advanced form, potentially capable of making decisions and handling complex tasks.
    There's still no evidence we're anywhere near a future enabled by AGI, but it's a popular buzzword. Many of these future-looking statements from AI leaders borrow tropes from science fiction.
    Both boosters and doomers — how Bender and Hanna describe AI enthusiasts and those worried about the potential for harm — rely on sci-fi scenarios.
    The boosters imagine an AI-powered futuristic society.
    The doomers bemoan a future where AI robots take over the world and wipe out humanity.The connecting thread, according to the authors, is an unshakable belief that AI is smarter than humans and inevitable.
    "One of the things that we see a lot in the discourse is this idea that the future is fixed, and it's just a question of how fast we get there," Bender said.
    "And then there's this claim that this particular technology is a step on that path, and it's all marketing.
    It is helpful to be able to see behind it." Part of why AI is so popular is that an autonomous functional AI assistant would mean AI companies are fulfilling their promises of world-changing innovation to their investors.
    Planning for that future — whether it's a utopia or dystopia — keeps investors looking forward as the companies burn through billions of dollars and admit they'll miss their carbon emission goals.
    For better or worse, life is not science fiction.
    Whenever you see someone claiming their AI product is straight out of a movie, it's a good sign to approach with skepticism.
    Ask what goes in and how outputs are evaluated

    One of the easiest ways to see through AI marketing fluff is to look and see whether the company is disclosing how it operates.
    Many AI companies won't tell you what content is used to train their models.
    But they usually disclose what the company does with your data and sometimes brag about how their models stack up against competitors.
    That's where you should start looking, typically in their privacy policies. One of the top complaints and concerns from creators is how AI models are trained.
    There are many lawsuits over alleged copyright infringement, and there are a lot of concerns over bias in AI chatbots and their capacity for harm.
    "If you wanted to create a system that is designed to move things forward rather than reproduce the oppressions of the past, you would have to start by curating your data," Bender said.
    Instead, AI companies are grabbing "everything that wasn't nailed down on the internet," Hanna said. If you're hearing about an AI product for the first time, one thing in particular to look out for is any kind of statistic that highlights its effectiveness.
    Like many other researchers, Bender and Hanna have called out that a finding with no citation is a red flag.
    "Anytime someone is selling you something but not giving you access to how it was evaluated, you are on thin ice," Bender said. It can be frustrating and disappointing when AI companies don't disclose certain information about how their AI products work and how they were developed.
    But recognizing those holes in their sales pitch can help deflate hype, even though it would be better to have the information.
    For more, check out our full ChatGPT glossary and how to turn off Apple Intelligence.
    How to Spot AI Hype and Avoid The AI Con, According to Two Experts
    "Artificial intelligence, if we're being frank, is a con: a bill of goods you are being sold to line someone's pockets."That is the heart of the argument that linguist Emily Bender and sociologist Alex Hanna make in their new book The AI Con. It's a useful guide for anyone whose life has intersected with technologies sold as artificial intelligence and anyone who's questioned their real usefulness, which is most of us. Bender is a professor at the University of Washington who was named one of Time magazine's most influential people in artificial intelligence, and Hanna is the director of research at the nonprofit Distributed AI Research Institute and a former member of the ethical AI team at Google.The explosion of ChatGPT in late 2022 kicked off a new hype cycle in AI. Hype, as the authors define it, is the "aggrandizement" of technology that you are convinced you need to buy or invest in "lest you miss out on entertainment or pleasure, monetary reward, return on investment, or market share." But it's not the first time, nor likely the last, that scholars, government leaders and regular people have been intrigued and worried by the idea of machine learning and AI.Bender and Hanna trace the roots of machine learning back to the 1950s, to when mathematician John McCarthy coined the term artificial intelligence. It was in an era when the United States was looking to fund projects that would help the country gain any kind of edge on the Soviets militarily, ideologically and technologically. "It didn't spring whole cloth out of Zeus's head or anything. This has a longer history," Hanna said in an interview with CNET. "It's certainly not the first hype cycle with, quote, unquote, AI."Today's hype cycle is propelled by the billions of dollars of venture capital investment into startups like OpenAI and the tech giants like Meta, Google and Microsoft pouring billions of dollars into AI research and development. 
The result is clear, with all the newest phones, laptops and software updates drenched in AI-washing. And there are no signs that AI research and development will slow down, thanks in part to a growing motivation to beat China in AI development. Not the first hype cycle indeed.Of course, generative AI in 2025 is much more advanced than the Eliza psychotherapy chatbot that first enraptured scientists in the 1970s. Today's business leaders and workers are inundated with hype, with a heavy dose of FOMO and seemingly complex but often misused jargon. Listening to tech leaders and AI enthusiasts, it might seem like AI will take your job to save your company money. But the authors argue that neither is wholly likely, which is one reason why it's important to recognize and break through the hype.So how do we recognize AI hype? These are a few telltale signs, according to Bender and Hanna, that we share below. The authors outline more questions to ask and strategies for AI hype busting in their book, which is out now in the US.Watch out for language that humanizes AIAnthropomorphizing, or the process of giving an inanimate object human-like characteristics or qualities, is a big part of building AI hype. An example of this kind of language can be found when AI companies say their chatbots can now "see" and "think."These can be useful comparisons when trying to describe the ability of new object-identifying AI programs or deep-reasoning AI models, but they can also be misleading. AI chatbots aren't capable of seeing of thinking because they don't have brains. Even the idea of neural nets, Hanna noted in our interview and in the book, is based on human understanding of neurons from the 1950s, not actually how neurons work, but it can fool us into believing there's a brain behind the machine.That belief is something we're predisposed to because of how we as humans process language. 
    We're conditioned to imagine that there is a mind behind the text we see, even when we know it's generated by AI, Bender said. "We interpret language by developing a model in our minds of who the speaker was," Bender added. In these models, we use our knowledge of the person speaking to create meaning, not just the meaning of the words they say. "So when we encounter synthetic text extruded from something like ChatGPT, we're going to do the same thing," Bender said. "And it is very hard to remind ourselves that the mind isn't there. It's just a construct that we have produced."
    The authors argue that part of why AI companies try to convince us their products are human-like is that this sets the stage for them to convince us that AI can replace humans, whether at work or as creators. It's compelling to believe that AI could be the silver-bullet fix to complicated problems in critical industries like health care and government services.
    But more often than not, the authors argue, AI isn't being used to fix anything. AI is sold with the goal of efficiency, but AI services end up replacing qualified workers with black-box machines that need copious amounts of babysitting from underpaid contract or gig workers. As Hanna put it in our interview, "AI is not going to take your job, but it will make your job shittier."

    Be dubious of the phrase 'super intelligence'

    If a human can't do something, you should be wary of claims that an AI can do it. "Superhuman intelligence, or super intelligence, is a very dangerous turn of phrase, insofar as it thinks that some technology is going to make humans superfluous," Hanna said. In "certain domains, like pattern matching at scale, computers are quite good at that. But if there's an idea that there's going to be a superhuman poem, or a superhuman notion of research or doing science, that is clear hype."
    Bender added, "And we don't talk about airplanes as superhuman flyers or rulers as superhuman measurers. It seems to be only in this AI space that that comes up."
    The idea of AI "super intelligence" comes up often when people talk about artificial general intelligence. Many CEOs struggle to define what exactly AGI is, but it's essentially AI's most advanced form, potentially capable of making decisions and handling complex tasks. There's still no evidence we're anywhere near a future enabled by AGI, but it's a popular buzzword.
    Many of these future-looking statements from AI leaders borrow tropes from science fiction. Both boosters and doomers — as Bender and Hanna describe AI enthusiasts and those worried about the potential for harm — rely on sci-fi scenarios. The boosters imagine an AI-powered futuristic society. The doomers bemoan a future where AI robots take over the world and wipe out humanity.
    The connecting thread, according to the authors, is an unshakable belief that AI is smarter than humans and inevitable. "One of the things that we see a lot in the discourse is this idea that the future is fixed, and it's just a question of how fast we get there," Bender said. "And then there's this claim that this particular technology is a step on that path, and it's all marketing. It is helpful to be able to see behind it."
    Part of why AI is so popular is that an autonomous, functional AI assistant would mean AI companies are fulfilling their promises of world-changing innovation to their investors. Planning for that future — whether it's a utopia or dystopia — keeps investors looking forward as the companies burn through billions of dollars and admit they'll miss their carbon emission goals. For better or worse, life is not science fiction. Whenever you see someone claiming their AI product is straight out of a movie, it's a good sign to approach with skepticism.
    Ask what goes in and how outputs are evaluated

    One of the easiest ways to see through AI marketing fluff is to look at whether the company discloses how it operates. Many AI companies won't tell you what content is used to train their models. But they usually disclose what the company does with your data and sometimes brag about how their models stack up against competitors. That's where you should start looking, typically in their privacy policies.
    One of the top complaints and concerns from creators is how AI models are trained. There are many lawsuits over alleged copyright infringement, and there are a lot of concerns over bias in AI chatbots and their capacity for harm. "If you wanted to create a system that is designed to move things forward rather than reproduce the oppressions of the past, you would have to start by curating your data," Bender said. Instead, AI companies are grabbing "everything that wasn't nailed down on the internet," Hanna said.
    If you're hearing about an AI product for the first time, one thing in particular to look out for is any kind of statistic that highlights its effectiveness. Like many other researchers, Bender and Hanna have called out that a finding with no citation is a red flag. "Anytime someone is selling you something but not giving you access to how it was evaluated, you are on thin ice," Bender said.
    It can be frustrating and disappointing when AI companies don't disclose certain information about how their AI products work and how they were developed. But recognizing those holes in their sales pitch can help deflate hype, even though it would be better to have the information. For more, check out our full ChatGPT glossary and how to turn off Apple Intelligence.
    Source: www.cnet.com
    How to Spot AI Hype and Avoid The AI Con, According to Two Experts