• OpenAI: The power and the pride

    In April, Paul Graham, the founder of the tech startup accelerator Y Combinator, sent a tweet in response to former YC president and current OpenAI CEO Sam Altman. Altman had just bid a public goodbye to GPT-4 on X, and Graham had a follow-up question. 

    “If you had [GPT-4’s model weights] etched on a piece of metal in the most compressed form,” Graham wrote, referring to the values that determine the model’s behavior, “how big would the piece of metal have to be? This is a mostly serious question. These models are history, and by default digital data evaporates.”

    There is no question that OpenAI pulled off something historic with its release of ChatGPT, then powered by GPT-3.5, in 2022. It set in motion an AI arms race that has already changed the world in a number of ways and seems poised to have an even greater long-term effect than the short-term disruptions to things like education and employment that we are already beginning to see. How that turns out for humanity is something we are still reckoning with and may be for quite some time. But a pair of recent books attempt to get their arms around it with accounts of what two leading technology journalists saw at the OpenAI revolution.

    In Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, Karen Hao tells the story of the company’s rise to power and its far-reaching impact all over the world. Meanwhile, The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future, by the Wall Street Journal’s Keach Hagey, homes in more on Altman’s personal life, from his childhood through the present day, in order to tell the story of OpenAI. Both paint complex pictures and show Altman in particular as a brilliantly effective yet deeply flawed creature of Silicon Valley—someone capable of always getting what he wants, but often by manipulating others. 

    Hao, who was formerly a reporter with MIT Technology Review, began reporting on OpenAI while at this publication and remains an occasional contributor. One chapter of her book grew directly out of that reporting. And in fact, as Hao says in the acknowledgments of Empire of AI, some of her reporting for MIT Technology Review, a series on AI colonialism, “laid the groundwork for the thesis and, ultimately, the title of this book.” So you can take this as a kind of disclaimer that we are predisposed to look favorably on Hao’s work. 

    With that said, Empire of AI is a powerful work, bristling not only with great reporting but also with big ideas. Those ideas come through in service of two main themes.

    The first is simple: It is the story of ambition overriding ethics. The history of OpenAI as Hao tells it (and as Hagey does too) is very much a tale of a company that was founded on the idealistic desire to create a safety-focused artificial general intelligence but instead became more interested in winning. This is a story we’ve seen many times before in Big Tech. See Theranos, which was going to make diagnostics easier, or Uber, which was founded to break the cartel of “Big Taxi.” But the closest analogue might be Google, which went from “Don’t be evil” to (at least in the eyes of the courts) illegal monopolist. For that matter, consider how Google went from holding off on releasing its language model as a consumer product out of an abundance of caution to rushing a chatbot out the door to catch up with and beat OpenAI. In Silicon Valley, no matter what one’s original intent, it always comes back to winning.

    The second theme is more complex and forms the book’s thesis about what Hao calls AI colonialism. The idea is that the large AI companies act like traditional empires, siphoning wealth from the bottom rungs of society in the forms of labor, creative works, raw materials, and the like to fuel their ambition and enrich those at the top of the ladder. “I’ve found only one metaphor that encapsulates the nature of what these AI power players are: empires,” she writes.

    “During the long era of European colonialism, empires seized and extracted resources that were not their own and exploited the labor of the people they subjugated to mine, cultivate, and refine those resources for the empires’ enrichment.” She goes on to chronicle her own growing disillusionment with the industry. “With increasing clarity,” she writes, “I realized that the very revolution promising to bring a better future was instead, for people on the margins of society, reviving the darkest remnants of the past.” 

    To document this, Hao steps away from her desk and goes out into the world to see the effects of this empire as it sprawls across the planet. She travels to Colombia to meet with data labelers tasked with teaching AI what various images show, one of whom she describes sprinting back to her apartment for the chance to make a few dollars. She documents how workers in Kenya who performed data-labeling content moderation for OpenAI came away traumatized by seeing so much disturbing material. In Chile she documents how the industry extracts precious resources—water, power, copper, lithium—to build out data centers. 

    She lands on the ways people are pushing back against the empire of AI across the world. Hao draws lessons from New Zealand, where Maori people are attempting to save their language using a small language model of their own making. Trained on volunteers’ voice recordings and running on just two graphics processing units, or GPUs, rather than the thousands employed by the likes of OpenAI, it’s meant to benefit the community, not exploit it. 

    Hao writes that she is not against AI. Rather: “What I reject is the dangerous notion that broad benefit from AI can only be derived from—indeed will ever emerge from—a vision of the technology that requires the complete capitulation of our privacy, our agency, and our worth, including the value of our labor and art, toward an ultimately imperial centralization project … [The New Zealand model] shows us another way. It imagines how AI could be exactly the opposite. Models can be small and task-specific, their training data contained and knowable, ridding the incentives for widespread exploitative and psychologically harmful labor practices and the all-consuming extractivism of producing and running massive supercomputers.”

    Hagey’s book is more squarely focused on Altman’s ambition, which she traces back to his childhood. Yet interestingly, she also zeroes in on the OpenAI CEO’s attempt to create an empire. Indeed, “Altman’s departure from YC had not slowed his civilization-building ambitions,” Hagey writes. She goes on to chronicle how Altman, who had previously mulled a run for governor of California, set up experiments with income distribution via Tools for Humanity, the parent company of Worldcoin. She quotes Altman saying of it, “I thought it would be interesting to see … just how far technology could accomplish some of the goals that used to be done by nation-states.”

    Overall, The Optimist is the more straightforward business biography of the two. Hagey has packed it full with scoops and insights and behind-the-scenes intrigue. It is immensely readable as a result, especially in the second half, when OpenAI really takes over the story. Hagey also seems to have been given far more access to Altman and his inner circles, personal and professional, than Hao did, and that allows for a fuller telling of the CEO’s story in places. For example, both writers cover the tragic story of Altman’s sister Annie, her estrangement from the family, and her accusations in particular about suffering sexual abuse at the hands of Sam (something he and the rest of the Altman family vehemently deny). Hagey’s telling provides a more nuanced picture of the situation, with more insight into family dynamics.

    Hagey concludes by describing Altman’s reckoning with his role in the long arc of human history and what it will mean to create a “superintelligence.” His place in that sweep is something that clearly has consumed the CEO’s thoughts. When Paul Graham asked about preserving GPT-4, for example, Altman had a response at the ready. He replied that the company had already considered this, and that the sheet of metal would need to be 100 meters square.
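    As a rough sanity check on that figure (every number here is an assumption, not something either book reports: GPT-4’s parameter count has only ever been rumored, at roughly 1.8 trillion, and 16 bits per weight is a guess), reading “100 meters square” as a 100 m × 100 m sheet:

    \[
    1.8\times10^{12}\ \text{parameters} \times 16\ \text{bits} \approx 2.9\times10^{13}\ \text{bits} \approx 3.6\ \text{TB}
    \]
    \[
    \frac{(100\ \text{m})^2}{2.9\times10^{13}\ \text{bits}} \approx 3.5\times10^{-10}\ \text{m}^2\ \text{per bit}
    \;\Longrightarrow\; \text{a bit cell of roughly } 19\ \mu\text{m} \times 19\ \mu\text{m},
    \]

    a feature size coarse enough to etch durably and read back with simple optics, which makes a sheet of that scale at least plausible under those assumptions.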
  • What we've been playing - New York, Poker, and frustration

    A few of the things that have us hooked this week.

    Image credit: FromSoftware

    Feature

    by Robert Purchese
    Associate Editor

    Additional contributions by Ed Nightingale and Jim Trinca

    Published on May 24, 2025

    24th May
    Hello and welcome back to our regular feature where we write a little bit about some of the games we've been playing. This week, Bertie caves and installs the time-hogging phenomenon known as Balatro; Jim returns to the noir-like artistry of Grand Theft Auto 4; and Ed bangs his head repeatedly against Sekiro.
    What have you been playing?
    Catch up with the older editions of this column in our What We've Been Playing archive.
    Balatro, PS5

    Snap! Wait, that's not the right game, is it? (Watch on YouTube)
    I did it: I finally caved and played Balatro. It's free with PlayStation Plus at the moment so I thought why not? Let me explain that hesitation quickly. I've never really liked Poker. I tend to defiantly not like what everyone else likes, I don't know why, and I also struggle to be serious for extended periods of time. The thought of sitting around a table with a 'Poker' face on, for hours on end, seems like torture to me.
    But I bit, and guess what? No surprise: I really liked it. I had to search for what a couple of the poker hands meant, because I didn't know my flushes from my straights - and I guess there's some assumed knowledge on the game's part there - but otherwise, I was (ahem) straight in. Time to being hooked: about five minutes.
    I love the immediacy of games like this. I know I'm predisposed to liking quick-play deckbuilding games - they just work wonderfully with my mental wiring - but there's clearly a skill to onboarding people in a way that's fun and frictionless, and Balatro has got it. There's no waiting for the game to begin, you just press go and learn as you play.
    Anyway, brb, see you in a few hundred hours.
    -Bertie
    Grand Theft Auto 4

    Which GTA protagonists are the best? (Watch on YouTube)
    I've been replaying GTA 4 for a Thing I'm working on and rediscovering just how bold a game it is. Big budget video games tend to default to a sort of pseudo-photorealism as their visual style, and there's nothing wrong with that. As we know from a century of pointing lights and cameras at real actors, there is plenty of scope for creativity within that. But it is often a safe choice. With a triple-A budget comes the expectation to have the triple-A 'look', essentially mimicking what the real lights, cameras, and actors are doing at the time.
    GTA 4 doesn't have that look. It looks like GTA 4, with its unmistakable forever autumn draping a decaying urban sprawl in soft baths of burnt orange. With its desaturated neo-noir nights pocked with bursts of colour where city lights cut the dour air.
    It's a look that fully serves the themes of the game: a dismantling of the American Dream as experienced through the eyes of an immigrant - a war-damaged man fleeing a war-damaged society, only to find, like millions of people before him, that the problems from an old world tend to follow you to the new.
    Niko’s is a bleak life with fleeting moments of triumph and fleeting moments of levity, and his Liberty City reflects this in every flaking piece of paint and every particle of billowing trash. GTA 4 sticks resolutely and defiantly to its aesthetic of grime and decay in much the same way the underrated shooter Kane and Lynch 2: Dog Days did, in sending the player into an unwaveringly grim handicam snuff film and revelling in their discomfort. Both games are miraculous works of art.
    Plus in GTA 4, the stock market is called BAWSAQ, which is funny.
    -Jim
    Sekiro: Shadows Die Twice, PS4

    Here's Aoife sharing in some of Ed's Sekiro frustration. (Watch on YouTube)
    I don't think I've ever been as angry as when I play Sekiro. I'm not just talking about being a bit frustrated. I'm talking 'existential why the hell am I doing this to myself' despondency. I am not enjoying it, but I can't stop playing it.
    I know I shouldn't let it get to me. Get a grip Ed, it's just a silly little video game. I should really just learn to git gud, right? But: sigh.
    For context, this is the last big FromSoftware game I'm yet to finish, and I've started it three times now. I'm determined to finish it - I've come too far with these games to stop now. But Sekiro just hasn't clicked for me like the studio's other games have. In part that's down to aesthetics, I think, as I just vibe more with the dark fantasy of Souls and twisted Gothism of Bloodborne than I do the Japanese horror of Sekiro.
    But also it's to do with combat. It's so focused on a single method of fighting - parry parry parry - that there's no room for the expression or build variety that I really like. I do enjoy how rhythmical parrying can be, but each boss encounter feels like I'm banging my head against a wall, much more so than any other game of this type. At least the end is in sight as I only have the final boss to go (I'm ignoring the Demon of Hatred for the moment).
    At this point I'm just playing Sekiro out of stubbornness and spite, and I'm not sure what to be disappointed in, the game or myself.
    -Ed
  • The first US hub for experimental medical treatments is coming

    A bill that allows medical clinics to sell unproven treatments has been passed in Montana. 

    Under the legislation, doctors can apply for a license to open an experimental treatment clinic and recommend and sell therapies not approved by the Food and Drug Administration (FDA) to their patients. Once it’s signed by the governor, the law will be the most expansive in the country in allowing access to drugs that have not been fully tested.

    The bill allows for any drug produced in the state to be sold in it, providing it has been through phase I clinical trials—the initial, generally small, first-in-human studies that are designed to check that a new treatment is not harmful. These trials do not determine if the drug is effective.

    The bill, which was passed by the state legislature on April 29 and is expected to be signed by Governor Greg Gianforte, essentially expands on existing Right to Try legislation in the state. But while that law was originally designed to allow terminally ill people to access experimental drugs, the new bill was drafted and lobbied for by people interested in extending human lifespans—a group of longevity enthusiasts that includes scientists, libertarians, and influencers.  

    These longevity enthusiasts are hoping Montana will serve as a test bed for opening up access to experimental drugs. “I see no reason why it couldn’t be adopted by most of the other states,” said Todd White, speaking to an audience of policymakers and others interested in longevity at an event late last month in Washington, DC. White, who helped develop the bill and directs a research organization focused on aging, added that “there are some things that can be done at the federal level to allow Right to Try laws to proliferate more readily.” 

    Supporters of the bill say it gives individuals the freedom to make choices about their own bodies. At the same event, bioethicist Jessica Flanigan of the University of Richmond said she was “optimistic” about the measure, because “it’s great any time anybody is trying to give people back their medical autonomy.” 

    Ultimately, they hope that the new law will enable people to try unproven drugs that might help them live longer, make it easier for Americans to try experimental treatments without having to travel abroad, and potentially turn Montana into a medical tourism hub.

    But ethicists and legal scholars aren’t as optimistic. “I hate it,” bioethicist Alison Bateman-House of New York University says of the bill. She and others are worried about the ethics of promoting and selling unproven treatments—and the risks of harm should something go wrong.

    Easy access?

    No drugs have been approved to treat human aging. Some in the longevity field believe that regulation has held back the development of such drugs. In the US, federal law requires that drugs be shown to be both safe and effective before they can be sold. That requirement was made law in the 1960s following the thalidomide tragedy, in which women who took the drug for morning sickness had babies with sometimes severe disabilities. Since then, the FDA has been responsible for the approval of new drugs.  

    Typically, new drugs are put through a series of human trials. Phase I trials generally involve between 20 and 100 volunteers and are designed to check that the drug is safe for humans. If it is, the drug is then tested in larger groups of hundreds, and then thousands, of volunteers to assess the dose and whether it actually works. Once a drug is approved, people who are prescribed it are monitored for side effects. The entire process is slow, and it can last more than a decade—a particular pain point for people who are acutely aware of their own aging. 

    But some exceptions have been made for people who are terminally ill under Right to Try laws. Those laws allow certain individuals to apply for access to experimental treatments that have been through phase I clinical trials but have not received FDA approval.

    Montana first passed a Right to Try law in 2015. Then in 2023, the state expanded the law to include all patients there, not just those with terminal illnesses—meaning that any person in Montana could, in theory, take a drug that had been through only a phase I trial.

    At the time, this was cheered by many longevity enthusiasts—some of whom had helped craft the expanded measure.

    But practically, the change hasn’t worked out as they envisioned. “There was no licensing, no processing, no registration” for clinics that might want to offer those drugs, says White. “There needed to be another bill that provided regulatory clarity for service providers.” 

    So the new legislation addresses “how clinics can set up shop in Montana,” says Dylan Livingston, founder and CEO of the Alliance for Longevity Initiatives, which hosted the DC event. Livingston built A4LI, as it’s known, a few years ago, as a lobbying group for the science of human aging and longevity.

    Livingston, who is exploring multiple approaches to improve funding for scientific research and to change drug regulation, helped develop and push the 2023 bill in Montana with the support of State Senator Kenneth Bogner, he says. “I gave [him] a menu of things that could be done at the state level … and he loved the idea” of turning Montana into a medical tourism hub, he says.

    After all, as things stand, plenty of Americans travel abroad to receive experimental treatments that cannot legally be sold in the US, including expensive, unproven stem cell and gene therapies, says Livingston. 

    “If you’re going to go and get an experimental gene therapy, you might as well keep it in the country,” he says. Livingston has suggested that others might be interested in trying a novel drug designed to clear aged “senescent” cells from the body, which is currently entering phase II trials for an eye condition caused by diabetes. “One: let’s keep the money in the country, and two: if I was a millionaire getting an experimental gene therapy, I’d rather be in Montana than Honduras.”

    “Los Alamos for longevity”

    Honduras, in particular, has become something of a home base for longevity experiments. The island of Roatán is home to the Global Alliance for Regenerative Medicine clinic, which, along with various stem cell products, sells a controversial, unproven “anti-aging” gene therapy to customers including the wealthy longevity influencer Bryan Johnson.

    Tech entrepreneur and longevity enthusiast Niklas Anzinger has also founded the city of Infinita in the region’s special economic zone of Próspera, a private city where residents are able to make their own suggestions for medical regulations. It’s the second time he’s built a community there as part of his effort to build a “Los Alamos for longevity” on the island, a place where biotech companies can develop therapies that slow or reverse human aging “at warp speed,” and where individuals are free to take those experimental treatments. 

    Anzinger collaborated with White, the longevity enthusiast who spoke at the A4LI event, to help put together the new Montana bill. “He asked if I would help him try to advance the new bill, so that’s what we did for the last few months,” says White, who trained as an electrical engineer but left his career in telecommunications to work with an organization that uses blockchain to fund research into extending human lifespans. 

    “Right to Try has always been this thing [for people] who are terminal[ly ill] and trying a Hail Mary approach to solving these things; now Right to Try laws are being used to allow you to access treatments earlier,” White told the audience at the A4LI event. “Making it so that people can use longevity medicines earlier is, I think, a very important thing.”

    The new bill largely sets out the “infrastructure” for clinics that want to sell experimental treatments, says White. It states that clinics will need to have a license, for example, and that this must be renewed on an annual basis. 

    “Now somebody who actually wants to deliver drugs under the Right to Try law will be able to do so,” he says. The new legislation also protects prescribing doctors from disciplinary action.

    And it sets out requirements for informed consent that go further than those of existing Right to Try laws. Before a person takes an experimental drug under the new law, they will be required to provide written consent that includes a list of approved alternative drugs and a description of the worst potential outcome.

    On the safe side

    “In the Montana law, we explicitly enhanced the requirements for informed consent,” Anzinger told an audience at the same A4LI event. This, along with the fact that the treatments will have been through phase I clinical trials, will help to keep people safe, he argued. “We have to treat this with a very large degree of responsibility,” he added.

    “We obviously don’t want to be killing people,” says Livingston. 

    But he also adds that he, personally, won’t be signing up for any experimental treatments. “I want to be the 10 millionth, or even the 50 millionth, person to get the gene therapy,” he says. “I’m not that adventurous … I’ll let other people go first.”

    Others are indeed concerned that, for the “adventurous” people, these experimental treatments won’t necessarily be safe. Phase I trials are typically tiny, often involving fewer than 50 people, all of whom are usually in good health. A trial like that won’t tell you much about side effects that show up in only 5% of people, for example, or about interactions the drug might have with other medicines.
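    To see why, here is a minimal back-of-envelope sketch in Python (the 50-person trial size and 5% side-effect rate are illustrative assumptions taken from the paragraph above, not figures from any specific trial):

    # Rough illustration: what a ~50-person phase I trial can say about
    # a side effect that affects 5% of patients (assumed figures).
    from math import comb

    n = 50      # assumed trial size
    p = 0.05    # assumed true side-effect rate

    def binom_pmf(k: int, n: int, p: float) -> float:
        # Probability of seeing exactly k affected participants (binomial model)
        return comb(n, k) * p**k * (1 - p) ** (n - k)

    expected_cases = n * p                                      # 2.5 cases on average
    p_zero_cases = binom_pmf(0, n, p)                           # ~7.7%: the trial may see none at all
    p_two_or_fewer = sum(binom_pmf(k, n, p) for k in range(3))  # ~54%: hard to distinguish from noise

    print(f"Expected cases in {n} people: {expected_cases:.1f}")
    print(f"Chance the trial sees zero cases: {p_zero_cases:.1%}")
    print(f"Chance it sees two cases or fewer: {p_two_or_fewer:.1%}")

    In other words, seeing a handful of cases, or none at all, is entirely consistent with a 5% rate, so a trial of that size can neither confirm nor rule out such a side effect with any confidence.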

    Around 90% of drug candidates in clinical trials fail. And around 17% of drugs fail late-stage clinical trials because of safety concerns. Even those that make it all the way through clinical trials and get approved by the FDA can still end up being withdrawn from the market when rare but serious side effects show up. Between 1992 and 2023, 23 drugs that were given accelerated approval for cancer indications were later withdrawn from the market. And between 1950 and 2013, the reason for the withdrawal of 95 drugs was “death.”

    “It’s disturbing that they want to make drugs available after phase I testing,” says Sharona Hoffman, professor of law and bioethics at Case Western Reserve University in Cleveland, Ohio. “This could endanger patients.”

    “Famously, the doctor’s first obligation is to first do no harm,” says Bateman-House. “If [a drug] has not been through clinical trials, how do you have any standing on which to think it isn’t going to do any harm?”

    But supporters of the bill argue that individuals can make their own decisions about risk. When speaking at the A4LI event, Flanigan introduced herself as a bioethicist before adding “but don’t hold it against me; we’re not all so bad.” She argued that current drug regulations impose a “massive amount of restrictions on your bodily rights and your medical freedom.” Why should public officials be the ones making decisions about what’s safe for people? Individuals, she argued, should be empowered to make those judgments themselves.

    Other ethicists counter that this isn’t an issue of people’s rights. There are lots of generally accepted laws about when we can access drugs, says Hoffman; people aren’t allowed to drink and drive because they might kill someone. “So, no, you don’t have a right to ingest everything you want if there are risks associated with it.”

    The idea that individuals have a right to access experimental treatments has in fact failed in US courts in the past, says Carl Coleman, a bioethicist and legal scholar at Seton Hall in New Jersey. 

    He points to a case from 20 years ago: In the early 2000s, Frank Burroughs founded the Abigail Alliance for Better Access to Developmental Drugs. His daughter, Abigail Burroughs, had head and neck cancer, and she had tried and failed to access experimental drugs. In 2003, about two years after Abigail’s death, the group sued the FDA, arguing that people with terminal cancer have a constitutionally protected right to access experimental, unapproved treatments, once those treatments have been through phase I trials. In 2007, however, a court rejected that argument, determining that terminally ill individuals do not have a constitutional right to experimental drugs.

    Bateman-House also questions a provision in the Montana bill that claims to make treatments more equitable. It states that “experimental treatment centers” should allocate 2% of their net annual profits “to support access to experimental treatments and healthcare for qualifying Montana residents.” Bateman-House says she’s never seen that kind of language in a bill before. It may sound positive, but it could in practice introduce even more risk to the local community. “On the one hand, I like equity,” she says. “On the other hand, I don’t like equity to snake oil.”

    After all, the doctors prescribing these drugs won’t know if they will work. It is never ethical to make somebody pay for a treatment when you don’t have any idea whether it will work, Bateman-House adds. “That’s how the US system has been structured: There’s no profit without evidence of safety and efficacy.”

    The clinics are coming

    Any clinics that offer experimental treatments in Montana will only be allowed to sell drugs that have been made within the state, says Coleman. “Federal law requires any drug that is going to be distributed in interstate commerce to have FDA approval,” he says.

    White isn’t too worried about that. Montana already has manufacturing facilities for biotech and pharmaceutical companies, including Pfizer. “That was one of the specific advantages [of focusing] on Montana, because everything can be done in state,” he says. He also believes that the current administration is “predisposed” to change federal laws around interstate drug manufacturing.

    At any rate, the clinics are coming to Montana, says Livingston. “We have half a dozen that are interested, and maybe two or three that are definitively going to set up shop out there.” He won’t name names, but he says some of the interested clinicians already have clinics in the US, while others are abroad.

    Mac Davis—founder and CEO of Minicircle, the company that developed the controversial “anti-aging” gene therapy—told MIT Technology Review he was “looking into it.”

    “I think this can be an opportunity for America and Montana to really kind of corner the market when it comes to medical tourism,” says Livingston. “There is no other place in the world with this sort of regulatory environment.”
“Ifhas not been through clinical trials, how do you have any standing on which to think it isn’t going to do any harm?” But supporters of the bill argue that individuals can make their own decisions about risk. When speaking at the A4LI event, Flanigan introduced herself as a bioethicist before adding “but don’t hold it against me; we’re not all so bad.” She argued that current drug regulations impose a “massive amount of restrictions on your bodily rights and your medical freedom.” Why should public officials be the ones making decisions about what’s safe for people? Individuals, she argued, should be empowered to make those judgments themselves. Other ethicists counter that this isn’t an issue of people’s rights. There are lots of generally accepted laws about when we can access drugs, says Hoffman; people aren’t allowed to drink and drive because they might kill someone. “So, no, you don’t have a right to ingest everything you want if there are risks associated with it.” The idea that individuals have a right to access experimental treatments has in fact failed in US courts in the past, says Carl Coleman, a bioethicist and legal scholar at Seton Hall in New Jersey.  He points to a case from 20 years ago: In the early 2000s, Frank Burroughs founded the Abigail Alliance for Better Access to Developmental Drugs. His daughter, Abigail Burroughs, had head and neck cancer, and she had tried and failed to access experimental drugs. In 2003, about two years after Abigail’s death, the group sued the FDA, arguing that people with terminal cancer have a constitutionally protected right to access experimental, unapproved treatments, once those treatments have been through phase I trials. In 2007, however, a court rejected that argument, determining  that terminally ill individuals do not have a constitutional right to experimental drugs. Bateman-House also questions a provision in the Montana bill that claims to make treatments more equitable. It states that “experimental treatment centers” should allocate 2% of their net annual profits “to support access to experimental treatments and healthcare for qualifying Montana residents.” Bateman-House says she’s never seen that kind of language in a bill before. It may sound positive, but it could in practice introduce even more risk to the local community. “On the one hand, I like equity,” she says. “On the other hand, I don’t like equity to snake oil.” After all, the doctors prescribing these drugs won’t know if they will work. It is never ethical to make somebody pay for a treatment when you don’t have any idea whether it will work, Bateman-House adds. “That’s how the US system has been structured: There’s no profit without evidence of safety and efficacy.” The clinics are coming Any clinics that offer experimental treatments in Montana will only be allowed to sell drugs that have been made within the state, says Coleman. “Federal law requires any drug that is going to be distributed in interstate commerce to have FDA approval,” he says. White isn’t too worried about that. Montana already has manufacturing facilities for biotech and pharmaceutical companies, including Pfizer. “That was one of the specific advantageson Montana, because everything can be done in state,” he says. He also believes that the current administration is “predisposed” to change federal laws around interstate drug manufacturing.At any rate, the clinics are coming to Montana, says Livingston. 
“We have half a dozen that are interested, and maybe two or three that are definitively going to set up shop out there.” He won’t name names, but he says some of the interested clinicians already have clinics in the US, while others are abroad.  Mac Davis—founder and CEO of Minicircle, the company that developed the controversial “anti-aging” gene therapy—told MIT Technology Review he was “looking into it.” “I think this can be an opportunity for America and Montana to really kind of corner the market when it comes to medical tourism,” says Livingston. “There is no other place in the world with this sort of regulatory environment.” #first #hub #experimental #medical #treatments
    Source: www.technologyreview.com
    The first US hub for experimental medical treatments is coming
    A bill that allows medical clinics to sell unproven treatments has been passed in Montana.  Under the legislation, doctors can apply for a license to open an experimental treatment clinic and recommend and sell therapies not approved by the Food and Drug Administration (FDA) to their patients. Once it’s signed by the governor, the law will be the most expansive in the country in allowing access to drugs that have not been fully tested.  The bill allows for any drug produced in the state to be sold in it, providing it has been through phase I clinical trials—the initial, generally small, first-in-human studies that are designed to check that a new treatment is not harmful. These trials do not determine if the drug is effective. The bill, which was passed by the state legislature on April 29 and is expected to be signed by Governor Greg Gianforte, essentially expands on existing Right to Try legislation in the state. But while that law was originally designed to allow terminally ill people to access experimental drugs, the new bill was drafted and lobbied for by people interested in extending human lifespans—a group of longevity enthusiasts that includes scientists, libertarians, and influencers.   These longevity enthusiasts are hoping Montana will serve as a test bed for opening up access to experimental drugs. “I see no reason why it couldn’t be adopted by most of the other states,” said Todd White, speaking to an audience of policymakers and others interested in longevity at an event late last month in Washington, DC. White, who helped develop the bill and directs a research organization focused on aging, added that “there are some things that can be done at the federal level to allow Right to Try laws to proliferate more readily.”  Supporters of the bill say it gives individuals the freedom to make choices about their own bodies. At the same event, bioethicist Jessica Flanigan of the University of Richmond said she was “optimistic” about the measure, because “it’s great any time anybody is trying to give people back their medical autonomy.”  Ultimately, they hope that the new law will enable people to try unproven drugs that might help them live longer, make it easier for Americans to try experimental treatments without having to travel abroad, and potentially turn Montana into a medical tourism hub. But ethicists and legal scholars aren’t as optimistic. “I hate it,” bioethicist Alison Bateman-House of New York University says of the bill. She and others are worried about the ethics of promoting and selling unproven treatments—and the risks of harm should something go wrong. Easy access? No drugs have been approved to treat human aging. Some in the longevity field believe that regulation has held back the development of such drugs. In the US, federal law requires that drugs be shown to be both safe and effective before they can be sold. That requirement was made law in the 1960s following the thalidomide tragedy, in which women who took the drug for morning sickness had babies with sometimes severe disabilities. Since then, the FDA has been responsible for the approval of new drugs.   Typically, new drugs are put through a series of human trials. Phase I trials generally involve between 20 and 100 volunteers and are designed to check that the drug is safe for humans. If it is, the drug is then tested in larger groups of hundreds, and then thousands, of volunteers to assess the dose and whether it actually works. 
Once a drug is approved, people who are prescribed it are monitored for side effects. The entire process is slow, and it can last more than a decade—a particular pain point for people who are acutely aware of their own aging.  But some exceptions have been made for people who are terminally ill under Right to Try laws. Those laws allow certain individuals to apply for access to experimental treatments that have been through phase I clinical trials but have not received FDA approval. Montana first passed a Right to Try law in 2015 (a federal law was passed around three years later). Then in 2023, the state expanded the law to include all patients there, not just those with terminal illnesses—meaning that any person in Montana could, in theory, take a drug that had been through only a phase I trial. At the time, this was cheered by many longevity enthusiasts—some of whom had helped craft the expanded measure. But practically, the change hasn’t worked out as they envisioned. “There was no licensing, no processing, no registration” for clinics that might want to offer those drugs, says White. “There needed to be another bill that provided regulatory clarity for service providers.”  So the new legislation addresses “how clinics can set up shop in Montana,” says Dylan Livingston, founder and CEO of the Alliance for Longevity Initiatives, which hosted the DC event. Livingston built A4LI, as it’s known, a few years ago, as a lobbying group for the science of human aging and longevity. Livingston, who is exploring multiple approaches to improve both funding for scientific research and to change drug regulation, helped develop and push the 2023 bill in Montana with the support of State Senator Kenneth Bogner, he says. “I gave [Bogner] a menu of things that could be done at the state level … and he loved the idea” of turning Montana into a medical tourism hub, he says.  After all, as things stand, plenty of Americans travel abroad to receive experimental treatments that cannot legally be sold in the US, including expensive, unproven stem cell and gene therapies, says Livingston.  “If you’re going to go and get an experimental gene therapy, you might as well keep it in the country,” he says. Livingston has suggested that others might be interested in trying a novel drug designed to clear aged “senescent” cells from the body, which is currently entering phase II trials for an eye condition caused by diabetes. “One: let’s keep the money in the country, and two: if I was a millionaire getting an experimental gene therapy, I’d rather be in Montana than Honduras.” “Los Alamos for longevity” Honduras, in particular, has become something of a home base for longevity experiments. The island of Roatán is home to the Global Alliance for Regenerative Medicine clinic, which, along with various stem cell products, sells a controversial unproven “anti-aging” gene therapy for around $20,000 to customers including wealthy longevity influencer Bryan Johnson.  Tech entrepreneur and longevity enthusiast Niklas Anzinger has also founded the city of Infinita in the region’s special economic zone of Próspera, a private city where residents are able to make their own suggestions for medical regulations. It’s the second time he’s built a community there as part of his effort to build a “Los Alamos for longevity” on the island, a place where biotech companies can develop therapies that slow or reverse human aging “at warp speed,” and where individuals are free to take those experimental treatments. 
(The first community, Vitalia, featured a biohacking lab, but came to an end following a disagreement between the two founders.)  Anzinger collaborated with White, the longevity enthusiast who spoke at the A4LI event (and is an advisor to Infinita VC, Anzinger’s investment company), to help put together the new Montana bill. “He asked if I would help him try to advance the new bill, so that’s what we did for the last few months,” says White, who trained as an electrical engineer but left his career in telecommunications to work with an organization that uses blockchain to fund research into extending human lifespans.  “Right to Try has always been this thing [for people] who are terminal[ly ill] and trying a Hail Mary approach to solving these things; now Right to Try laws are being used to allow you to access treatments earlier,” White told the audience at the A4LI event. “Making it so that people can use longevity medicines earlier is, I think, a very important thing.” The new bill largely sets out the “infrastructure” for clinics that want to sell experimental treatments, says White. It states that clinics will need to have a license, for example, and that this must be renewed on an annual basis.  “Now somebody who actually wants to deliver drugs under the Right to Try law will be able to do so,” he says. The new legislation also protects prescribing doctors from disciplinary action. And it sets out requirements for informed consent that go further than those of existing Right to Try laws. Before a person takes an experimental drug under the new law, they will be required to provide a written consent that includes a list of approved alternative drugs and a description of the worst potential outcome. On the safe side “In the Montana law, we explicitly enhanced the requirements for informed consent,” Anzinger told an audience at the same A4LI event. This, along with the fact that the treatments will have been through phase I clinical trials, will help to keep people safe, he argued. “We have to treat this with a very large degree of responsibility,” he added. “We obviously don’t want to be killing people,” says Livingston.  But he also adds that he, personally, won’t be signing up for any experimental treatments. “I want to be the 10 millionth, or even the 50 millionth, person to get the gene therapy,” he says. “I’m not that adventurous … I’ll let other people go first.” Others are indeed concerned that, for the “adventurous” people, these experimental treatments won’t necessarily be safe. Phase I trials are typically tiny, and they often involve less than 50 people, all of whom are typically in good health. A trial like that won’t tell you much about side effects that only show up in 5% of people, for example, or about interactions the drug might have with other medicines. Around 90% of drug candidates in clinical trials fail. And around 17% of drugs fail late-stage clinical trials because of safety concerns. Even those that make it all the way through clinical trials and get approved by the FDA can still end up being withdrawn from the market when rare but serious side effects show up. Between 1992 and 2023, 23 drugs that were given accelerated approval for cancer indications were later withdrawn from the market. And between 1950 and 2013, the reason for the withdrawal of 95 drugs was “death.” “It’s disturbing that they want to make drugs available after phase I testing,” says Sharona Hoffman, professor of law and bioethics at Case Western Reserve University in Cleveland, Ohio. 
“This could endanger patients.” “Famously, the doctor’s first obligation is to first do no harm,” says Bateman-House. “If [a drug] has not been through clinical trials, how do you have any standing on which to think it isn’t going to do any harm?” But supporters of the bill argue that individuals can make their own decisions about risk. When speaking at the A4LI event, Flanigan introduced herself as a bioethicist before adding “but don’t hold it against me; we’re not all so bad.” She argued that current drug regulations impose a “massive amount of restrictions on your bodily rights and your medical freedom.” Why should public officials be the ones making decisions about what’s safe for people? Individuals, she argued, should be empowered to make those judgments themselves. Other ethicists counter that this isn’t an issue of people’s rights. There are lots of generally accepted laws about when we can access drugs, says Hoffman; people aren’t allowed to drink and drive because they might kill someone. “So, no, you don’t have a right to ingest everything you want if there are risks associated with it.” The idea that individuals have a right to access experimental treatments has in fact failed in US courts in the past, says Carl Coleman, a bioethicist and legal scholar at Seton Hall in New Jersey.  He points to a case from 20 years ago: In the early 2000s, Frank Burroughs founded the Abigail Alliance for Better Access to Developmental Drugs. His daughter, Abigail Burroughs, had head and neck cancer, and she had tried and failed to access experimental drugs. In 2003, about two years after Abigail’s death, the group sued the FDA, arguing that people with terminal cancer have a constitutionally protected right to access experimental, unapproved treatments, once those treatments have been through phase I trials. In 2007, however, a court rejected that argument, determining  that terminally ill individuals do not have a constitutional right to experimental drugs. Bateman-House also questions a provision in the Montana bill that claims to make treatments more equitable. It states that “experimental treatment centers” should allocate 2% of their net annual profits “to support access to experimental treatments and healthcare for qualifying Montana residents.” Bateman-House says she’s never seen that kind of language in a bill before. It may sound positive, but it could in practice introduce even more risk to the local community. “On the one hand, I like equity,” she says. “On the other hand, I don’t like equity to snake oil.” After all, the doctors prescribing these drugs won’t know if they will work. It is never ethical to make somebody pay for a treatment when you don’t have any idea whether it will work, Bateman-House adds. “That’s how the US system has been structured: There’s no profit without evidence of safety and efficacy.” The clinics are coming Any clinics that offer experimental treatments in Montana will only be allowed to sell drugs that have been made within the state, says Coleman. “Federal law requires any drug that is going to be distributed in interstate commerce to have FDA approval,” he says. White isn’t too worried about that. Montana already has manufacturing facilities for biotech and pharmaceutical companies, including Pfizer. “That was one of the specific advantages [of focusing] on Montana, because everything can be done in state,” he says. He also believes that the current administration is “predisposed” to change federal laws around interstate drug manufacturing. 
(FDA commissioner Marty Makary has been a vocal critic of the agency and the pace at which it approves new drugs.) At any rate, the clinics are coming to Montana, says Livingston. “We have half a dozen that are interested, and maybe two or three that are definitively going to set up shop out there.” He won’t name names, but he says some of the interested clinicians already have clinics in the US, while others are abroad.  Mac Davis—founder and CEO of Minicircle, the company that developed the controversial “anti-aging” gene therapy—told MIT Technology Review he was “looking into it.” “I think this can be an opportunity for America and Montana to really kind of corner the market when it comes to medical tourism,” says Livingston. “There is no other place in the world with this sort of regulatory environment.”
  • #333;">How to Spot AI Hype and Avoid The AI Con, According to Two Experts
    "Artificial intelligence, if we're being frank, is a con: a bill of goods you are being sold to line someone's pockets."That is the heart of the argument that linguist Emily Bender and sociologist Alex Hanna make in their new book The AI Con.
    It's a useful guide for anyone whose life has intersected with technologies sold as artificial intelligence and anyone who's questioned their real usefulness, which is most of us.
    Bender is a professor at the University of Washington who was named one of Time magazine's most influential people in artificial intelligence, and Hanna is the director of research at the nonprofit Distributed AI Research Institute and a former member of the ethical AI team at Google. The explosion of ChatGPT in late 2022 kicked off a new hype cycle in AI.
    Hype, as the authors define it, is the "aggrandizement" of technology that you are convinced you need to buy or invest in "lest you miss out on entertainment or pleasure, monetary reward, return on investment, or market share." But it's not the first time, nor likely the last, that scholars, government leaders and regular people have been intrigued and worried by the idea of machine learning and AI. Bender and Hanna trace the roots of machine learning back to the 1950s, when mathematician John McCarthy coined the term artificial intelligence.
    It was in an era when the United States was looking to fund projects that would help the country gain any kind of edge on the Soviets militarily, ideologically and technologically.
    "It didn't spring whole cloth out of Zeus's head or anything.
    This has a longer history," Hanna said in an interview with CNET.
    "It's certainly not the first hype cycle with, quote, unquote, AI."Today's hype cycle is propelled by the billions of dollars of venture capital investment into startups like OpenAI and the tech giants like Meta, Google and Microsoft pouring billions of dollars into AI research and development.
    The result is clear, with all the newest phones, laptops and software updates drenched in AI-washing.
    And there are no signs that AI research and development will slow down, thanks in part to a growing motivation to beat China in AI development.
    Not the first hype cycle indeed. Of course, generative AI in 2025 is much more advanced than the Eliza psychotherapy chatbot that first enraptured scientists in the 1970s.
    Today's business leaders and workers are inundated with hype, with a heavy dose of FOMO and seemingly complex but often misused jargon.
    Listening to tech leaders and AI enthusiasts, it might seem like AI will take your job to save your company money.
    But the authors argue that neither is wholly likely, which is one reason why it's important to recognize and break through the hype. So how do we recognize AI hype? These are a few telltale signs, according to Bender and Hanna, that we share below.
    The authors outline more questions to ask and strategies for AI hype busting in their book, which is out now in the US.
    Watch out for language that humanizes AI
    Anthropomorphizing, or the process of giving an inanimate object human-like characteristics or qualities, is a big part of building AI hype.
    An example of this kind of language can be found when AI companies say their chatbots can now "see" and "think." These can be useful comparisons when trying to describe the ability of new object-identifying AI programs or deep-reasoning AI models, but they can also be misleading.
    AI chatbots aren't capable of seeing or thinking because they don't have brains.
    Even the idea of neural nets, Hanna noted in our interview and in the book, is based on human understanding of neurons from the 1950s, not on how neurons actually work, but it can fool us into believing there's a brain behind the machine. That belief is something we're predisposed to because of how we as humans process language.
    We're conditioned to imagine that there is a mind behind the text we see, even when we know it's generated by AI, Bender said.
    "We interpret language by developing a model in our minds of who the speaker was," Bender added.In these models, we use our knowledge of the person speaking to create meaning, not just using the meaning of the words they say.
    "So when we encounter synthetic text extruded from something like ChatGPT, we're going to do the same thing," Bender said.
    "And it is very hard to remind ourselves that the mind isn't there.
    It's just a construct that we have produced." The authors argue that part of why AI companies try to convince us their products are human-like is that this sets the foreground for them to convince us that AI can replace humans, whether it's at work or as creators.
    It's compelling for us to believe that AI could be the silver bullet fix to complicated problems in critical industries like health care and government services. But more often than not, the authors argue, AI isn't being used to fix anything.
    AI is sold with the goal of efficiency, but AI services end up replacing qualified workers with black box machines that need copious amounts of babysitting from underpaid contract or gig workers.
    As Hanna put it in our interview, "AI is not going to take your job, but it will make your job shittier."
    Be dubious of the phrase 'super intelligence'
    If a human can't do something, you should be wary of claims that an AI can do it.
    "Superhuman intelligence, or super intelligence, is a very dangerous turn of phrase, insofar as it thinks that some technology is going to make humans superfluous," Hanna said.
    In "certain domains, like pattern matching at scale, computers are quite good at that.
    But if there's an idea that there's going to be a superhuman poem, or a superhuman notion of research or doing science, that is clear hype." Bender added, "And we don't talk about airplanes as superhuman flyers or rulers as superhuman measurers, it seems to be only in this AI space that that comes up." The idea of AI "super intelligence" comes up often when people talk about artificial general intelligence.
    Many CEOs struggle to define what exactly AGI is, but it's essentially AI's most advanced form, potentially capable of making decisions and handling complex tasks.
    There's still no evidence we're anywhere near a future enabled by AGI, but it's a popular buzzword. Many of these future-looking statements from AI leaders borrow tropes from science fiction.
    Both boosters and doomers — how Bender and Hanna describe AI enthusiasts and those worried about the potential for harm — rely on sci-fi scenarios.
    The boosters imagine an AI-powered futuristic society.
    The doomers bemoan a future where AI robots take over the world and wipe out humanity. The connecting thread, according to the authors, is an unshakable belief that AI is smarter than humans and inevitable.
    "One of the things that we see a lot in the discourse is this idea that the future is fixed, and it's just a question of how fast we get there," Bender said.
    "And then there's this claim that this particular technology is a step on that path, and it's all marketing.
    It is helpful to be able to see behind it." Part of why AI is so popular is that an autonomous functional AI assistant would mean AI companies are fulfilling their promises of world-changing innovation to their investors.
    Planning for that future — whether it's a utopia or dystopia — keeps investors looking forward as the companies burn through billions of dollars and admit they'll miss their carbon emission goals.
    For better or worse, life is not science fiction.
    Whenever you see someone claiming their AI product is straight out of a movie, it's a good sign to approach with skepticism.
    Ask what goes in and how outputs are evaluated
    One of the easiest ways to see through AI marketing fluff is to look and see whether the company is disclosing how it operates.
    Many AI companies won't tell you what content is used to train their models.
    But they usually disclose what the company does with your data and sometimes brag about how their models stack up against competitors.
    That's where you should start looking, typically in their privacy policies. One of the top complaints and concerns from creators is how AI models are trained.
    There are many lawsuits over alleged copyright infringement, and there are a lot of concerns over bias in AI chatbots and their capacity for harm.
    "If you wanted to create a system that is designed to move things forward rather than reproduce the oppressions of the past, you would have to start by curating your data," Bender said.
    Instead, AI companies are grabbing "everything that wasn't nailed down on the internet," Hanna said. If you're hearing about an AI product for the first time, one thing in particular to look out for is any kind of statistic that highlights its effectiveness.
    Like many other researchers, Bender and Hanna have pointed out that a finding with no citation is a red flag.
    "Anytime someone is selling you something but not giving you access to how it was evaluated, you are on thin ice," Bender said. It can be frustrating and disappointing when AI companies don't disclose certain information about how their AI products work and how they were developed.
    But recognizing those holes in their sales pitch can help deflate hype, even though it would be better to have the information.
    For more, check out our full ChatGPT glossary and how to turn off Apple Intelligence.
    #0066cc;">#how #spot #hype #and #avoid #the #con #according #two #experts #quotartificial #intelligence #we039re #being #frank #bill #goods #you #are #sold #line #someone039s #pocketsquotthat #heart #argument #that #linguist #emily #bender #sociologist #alex #hannamake #their #new #bookthe #conit039s #useful #guide #for #anyone #whose #life #has #intersected #with #technologies #artificial #who039s #questioned #real #usefulness #which #most #usbender #professor #university #washington #who #was #named #one #time #magazine039s #influential #people #hanna #director #research #nonprofit #distributed #instituteand #former #member #ethical #team #googlethe #explosion #chatgpt #late #kicked #off #cycle #aihype #authors #define #quotaggrandizementquot #technology #convinced #need #buy #invest #quotlest #miss #out #entertainment #pleasure #monetary #reward #return #investment #market #sharequot #but #it039s #not #first #nor #likely #last #scholars #government #leaders #regular #have #been #intrigued #worried #idea #machine #learning #aibender #trace #roots #back #1950s #when #mathematician #john #mccarthy #coined #term #intelligenceit #era #united #states #looking #fund #projects #would #help #country #gain #any #kind #edge #soviets #militarily #ideologically #technologicallyquotit #didn039t #spring #whole #cloth #zeus039s #head #anythingthis #longer #historyquot #said #interview #cnetquotit039s #certainly #quote #unquote #aiquottoday039s #propelled #billions #dollars #venture #capital #into #startups #like #openai #tech #giants #meta #google #microsoft #pouring #developmentthe #result #clear #all #newest #phones #laptops #software #updates #drenched #aiwashingand #there #signs #development #will #slow #down #thanks #part #growing #motivation #beat #china #developmentnot #indeedof #course #generative #much #more #advanced #than #eliza #psychotherapy #chatbot #enraptured #scientists #1970stoday039s #business #workers #inundated #heavy #dose #fomo #seemingly #complex #often #misused #jargonlistening #enthusiasts #might #seem #take #your #job #save #company #moneybut #argue #neither #wholly #reason #why #important #recognize #break #through #hypeso #these #few #telltale #share #belowthe #outline #questions #ask #strategies #busting #book #now #uswatch #language #humanizes #aianthropomorphizing #process #giving #inanimate #object #humanlike #characteristics #qualities #big #building #hypean #example #this #can #found #companies #say #chatbots #quotseequot #quotthinkquotthese #comparisons #trying #describe #ability #objectidentifying #programs #deepreasoning #models #they #also #misleadingai #aren039t #capable #seeing #thinking #because #don039t #brainseven #neural #nets #noted #our #based #human #understanding #neurons #from #actually #work #fool #believing #there039s #brain #behind #machinethat #belief #something #predisposed #humans #languagewe039re #conditioned #imagine #mind #text #see #even #know #generated #saidquotwe #interpret #developing #model #minds #speaker #wasquot #addedin #use #knowledge #person #speaking #create #meaning #just #using #words #sayquotso #encounter #synthetic #extruded #going #same #thingquot #saidquotand #very #hard #remind #ourselves #isn039t #thereit039s #construct #producedquotthe #try #convince #products #sets #foreground #them #replace #whether #creatorsit039s #compelling #believe #could #silver #bullet #fix #complicated #problems #critical #industries #health #care #servicesbut #bring #used #anythingai #goal #efficiency #services #end #replacing #qualified 
#black #box #machines #copious #amounts #babysitting #underpaid #contract #gig #workersas #put #quotai #make #shittierquotbe #dubious #phrase #039super #intelligence039if #can039t #should #wary #claims #itquotsuperhuman #super #dangerous #turn #insofar #thinks #some #superfluousquot #saidin #quotcertain #domains #pattern #matching #scale #computers #quite #good #thatbut #superhuman #poem #notion #doing #science #hypequot #added #quotand #talk #about #airplanes #flyers #rulers #measurers #seems #only #space #comes #upquotthe #quotsuper #intelligencequot #general #intelligencemany #ceos #struggle #what #exactly #agi #essentially #ai039s #form #potentially #making #decisions #handling #tasksthere039s #still #evidence #anywhere #near #future #enabled #popularbuzzwordmany #futurelooking #statements #borrow #tropes #fictionboth #boosters #doomers #those #potential #harm #rely #scifi #scenariosthe #aipowered #futuristic #societythe #bemoan #where #robots #over #world #wipe #humanitythe #connecting #thread #unshakable #smarter #inevitablequotone #things #lot #discourse #fixed #question #fast #get #therequot #then #claim #particular #step #path #marketingit #helpful #able #itquotpart #popular #autonomous #functional #assistant #mean #fulfilling #promises #worldchanging #innovation #investorsplanning #utopia #dystopia #keeps #investors #forward #burn #admit #they039ll #carbon #emission #goalsfor #better #worse #fictionwhenever #someone #claiming #product #straight #movie #sign #approach #skepticism #goes #outputs #evaluatedone #easiest #ways #marketing #fluff #look #disclosing #operatesmany #won039t #tell #content #train #modelsbut #usually #disclose #does #data #sometimes #brag #stack #against #competitorsthat039s #start #typically #privacy #policiesone #top #complaints #concernsfrom #creators #trainedthere #many #lawsuits #alleged #copyright #infringement #concerns #bias #capacity #harmquotif #wanted #system #designed #move #rather #reproduce #oppressions #past #curating #dataquot #saidinstead #grabbing #quoteverything #wasn039t #nailed #internetquot #saidif #you039re #hearing #thing #statistic #highlights #its #effectivenesslike #other #researchers #called #finding #citation #red #flagquotanytime #selling #access #evaluated #thin #icequot #saidit #frustrating #disappointing #certain #information #were #developedbut #recognizing #holes #sales #pitch #deflate #though #informationfor #check #fullchatgpt #glossary #offapple
    How to Spot AI Hype and Avoid The AI Con, According to Two Experts
    "Artificial intelligence, if we're being frank, is a con: a bill of goods you are being sold to line someone's pockets."That is the heart of the argument that linguist Emily Bender and sociologist Alex Hanna make in their new book The AI Con. It's a useful guide for anyone whose life has intersected with technologies sold as artificial intelligence and anyone who's questioned their real usefulness, which is most of us. Bender is a professor at the University of Washington who was named one of Time magazine's most influential people in artificial intelligence, and Hanna is the director of research at the nonprofit Distributed AI Research Institute and a former member of the ethical AI team at Google.The explosion of ChatGPT in late 2022 kicked off a new hype cycle in AI. Hype, as the authors define it, is the "aggrandizement" of technology that you are convinced you need to buy or invest in "lest you miss out on entertainment or pleasure, monetary reward, return on investment, or market share." But it's not the first time, nor likely the last, that scholars, government leaders and regular people have been intrigued and worried by the idea of machine learning and AI.Bender and Hanna trace the roots of machine learning back to the 1950s, to when mathematician John McCarthy coined the term artificial intelligence. It was in an era when the United States was looking to fund projects that would help the country gain any kind of edge on the Soviets militarily, ideologically and technologically. "It didn't spring whole cloth out of Zeus's head or anything. This has a longer history," Hanna said in an interview with CNET. "It's certainly not the first hype cycle with, quote, unquote, AI."Today's hype cycle is propelled by the billions of dollars of venture capital investment into startups like OpenAI and the tech giants like Meta, Google and Microsoft pouring billions of dollars into AI research and development. The result is clear, with all the newest phones, laptops and software updates drenched in AI-washing. And there are no signs that AI research and development will slow down, thanks in part to a growing motivation to beat China in AI development. Not the first hype cycle indeed.Of course, generative AI in 2025 is much more advanced than the Eliza psychotherapy chatbot that first enraptured scientists in the 1970s. Today's business leaders and workers are inundated with hype, with a heavy dose of FOMO and seemingly complex but often misused jargon. Listening to tech leaders and AI enthusiasts, it might seem like AI will take your job to save your company money. But the authors argue that neither is wholly likely, which is one reason why it's important to recognize and break through the hype.So how do we recognize AI hype? These are a few telltale signs, according to Bender and Hanna, that we share below. The authors outline more questions to ask and strategies for AI hype busting in their book, which is out now in the US.Watch out for language that humanizes AIAnthropomorphizing, or the process of giving an inanimate object human-like characteristics or qualities, is a big part of building AI hype. An example of this kind of language can be found when AI companies say their chatbots can now "see" and "think."These can be useful comparisons when trying to describe the ability of new object-identifying AI programs or deep-reasoning AI models, but they can also be misleading. AI chatbots aren't capable of seeing of thinking because they don't have brains. 
Even the idea of neural nets, Hanna noted in our interview and in the book, is based on human understanding of neurons from the 1950s, not actually how neurons work, but it can fool us into believing there's a brain behind the machine.That belief is something we're predisposed to because of how we as humans process language. We're conditioned to imagine that there is a mind behind the text we see, even when we know it's generated by AI, Bender said. "We interpret language by developing a model in our minds of who the speaker was," Bender added.In these models, we use our knowledge of the person speaking to create meaning, not just using the meaning of the words they say. "So when we encounter synthetic text extruded from something like ChatGPT, we're going to do the same thing," Bender said. "And it is very hard to remind ourselves that the mind isn't there. It's just a construct that we have produced."The authors argue that part of why AI companies try to convince us their products are human-like is that this sets the foreground for them to convince us that AI can replace humans, whether it's at work or as creators. It's compelling for us to believe that AI could be the silver bullet fix to complicated problems in critical industries like health care and government services.But more often than not, the authors argue, AI isn't bring used to fix anything. AI is sold with the goal of efficiency, but AI services end up replacing qualified workers with black box machines that need copious amounts of babysitting from underpaid contract or gig workers. As Hanna put it in our interview, "AI is not going to take your job, but it will make your job shittier."Be dubious of the phrase 'super intelligence'If a human can't do something, you should be wary of claims that an AI can do it. "Superhuman intelligence, or super intelligence, is a very dangerous turn of phrase, insofar as it thinks that some technology is going to make humans superfluous," Hanna said. In "certain domains, like pattern matching at scale, computers are quite good at that. But if there's an idea that there's going to be a superhuman poem, or a superhuman notion of research or doing science, that is clear hype." Bender added, "And we don't talk about airplanes as superhuman flyers or rulers as superhuman measurers, it seems to be only in this AI space that that comes up."The idea of AI "super intelligence" comes up often when people talk about artificial general intelligence. Many CEOs struggle to define what exactly AGI is, but it's essentially AI's most advanced form, potentially capable of making decisions and handling complex tasks. There's still no evidence we're anywhere near a future enabled by AGI, but it's a popular buzzword.Many of these future-looking statements from AI leaders borrow tropes from science fiction. Both boosters and doomers — how Bender and Hanna describe AI enthusiasts and those worried about the potential for harm — rely on sci-fi scenarios. The boosters imagine an AI-powered futuristic society. The doomers bemoan a future where AI robots take over the world and wipe out humanity.The connecting thread, according to the authors, is an unshakable belief that AI is smarter than humans and inevitable. "One of the things that we see a lot in the discourse is this idea that the future is fixed, and it's just a question of how fast we get there," Bender said. "And then there's this claim that this particular technology is a step on that path, and it's all marketing. 
It is helpful to be able to see behind it."Part of why AI is so popular is that an autonomous functional AI assistant would mean AI companies are fulfilling their promises of world-changing innovation to their investors. Planning for that future — whether it's a utopia or dystopia — keeps investors looking forward as the companies burn through billions of dollars and admit they'll miss their carbon emission goals. For better or worse, life is not science fiction. Whenever you see someone claiming their AI product is straight out of a movie, it's a good sign to approach with skepticism. Ask what goes in and how outputs are evaluatedOne of the easiest ways to see through AI marketing fluff is to look and see whether the company is disclosing how it operates. Many AI companies won't tell you what content is used to train their models. But they usually disclose what the company does with your data and sometimes brag about how their models stack up against competitors. That's where you should start looking, typically in their privacy policies.One of the top complaints and concerns from creators is how AI models are trained. There are many lawsuits over alleged copyright infringement, and there are a lot of concerns over bias in AI chatbots and their capacity for harm. "If you wanted to create a system that is designed to move things forward rather than reproduce the oppressions of the past, you would have to start by curating your data," Bender said. Instead, AI companies are grabbing "everything that wasn't nailed down on the internet," Hanna said.If you're hearing about an AI product for the first time, one thing in particular to look out for is any kind of statistic that highlights its effectiveness. Like many other researchers, Bender and Hanna have called out that a finding with no citation is a red flag. "Anytime someone is selling you something but not giving you access to how it was evaluated, you are on thin ice," Bender said.It can be frustrating and disappointing when AI companies don't disclose certain information about how their AI products work and how they were developed. But recognizing those holes in their sales pitch can help deflate hype, even though it would be better to have the information. For more, check out our full ChatGPT glossary and how to turn off Apple Intelligence.
    المصدر: www.cnet.com
    #how #spot #hype #and #avoid #the #con #according #two #experts #quotartificial #intelligence #we039re #being #frank #bill #goods #you #are #sold #line #someone039s #pocketsquotthat #heart #argument #that #linguist #emily #bender #sociologist #alex #hannamake #their #new #bookthe #conit039s #useful #guide #for #anyone #whose #life #has #intersected #with #technologies #artificial #who039s #questioned #real #usefulness #which #most #usbender #professor #university #washington #who #was #named #one #time #magazine039s #influential #people #hanna #director #research #nonprofit #distributed #instituteand #former #member #ethical #team #googlethe #explosion #chatgpt #late #kicked #off #cycle #aihype #authors #define #quotaggrandizementquot #technology #convinced #need #buy #invest #quotlest #miss #out #entertainment #pleasure #monetary #reward #return #investment #market #sharequot #but #it039s #not #first #nor #likely #last #scholars #government #leaders #regular #have #been #intrigued #worried #idea #machine #learning #aibender #trace #roots #back #1950s #when #mathematician #john #mccarthy #coined #term #intelligenceit #era #united #states #looking #fund #projects #would #help #country #gain #any #kind #edge #soviets #militarily #ideologically #technologicallyquotit #didn039t #spring #whole #cloth #zeus039s #head #anythingthis #longer #historyquot #said #interview #cnetquotit039s #certainly #quote #unquote #aiquottoday039s #propelled #billions #dollars #venture #capital #into #startups #like #openai #tech #giants #meta #google #microsoft #pouring #developmentthe #result #clear #all #newest #phones #laptops #software #updates #drenched #aiwashingand #there #signs #development #will #slow #down #thanks #part #growing #motivation #beat #china #developmentnot #indeedof #course #generative #much #more #advanced #than #eliza #psychotherapy #chatbot #enraptured #scientists #1970stoday039s #business #workers #inundated #heavy #dose #fomo #seemingly #complex #often #misused #jargonlistening #enthusiasts #might #seem #take #your #job #save #company #moneybut #argue #neither #wholly #reason #why #important #recognize #break #through #hypeso #these #few #telltale #share #belowthe #outline #questions #ask #strategies #busting #book #now #uswatch #language #humanizes #aianthropomorphizing #process #giving #inanimate #object #humanlike #characteristics #qualities #big #building #hypean #example #this #can #found #companies #say #chatbots #quotseequot #quotthinkquotthese #comparisons #trying #describe #ability #objectidentifying #programs #deepreasoning #models #they #also #misleadingai #aren039t #capable #seeing #thinking #because #don039t #brainseven #neural #nets #noted #our #based #human #understanding #neurons #from #actually #work #fool #believing #there039s #brain #behind #machinethat #belief #something #predisposed #humans #languagewe039re #conditioned #imagine #mind #text #see #even #know #generated #saidquotwe #interpret #developing #model #minds #speaker #wasquot #addedin #use #knowledge #person #speaking #create #meaning #just #using #words #sayquotso #encounter #synthetic #extruded #going #same #thingquot #saidquotand #very #hard #remind #ourselves #isn039t #thereit039s #construct #producedquotthe #try #convince #products #sets #foreground #them #replace #whether #creatorsit039s #compelling #believe #could #silver #bullet #fix #complicated #problems #critical #industries #health #care #servicesbut #bring #used #anythingai #goal #efficiency #services #end #replacing #qualified #black #box 
#machines #copious #amounts #babysitting #underpaid #contract #gig #workersas #put #quotai #make #shittierquotbe #dubious #phrase #039super #intelligence039if #can039t #should #wary #claims #itquotsuperhuman #super #dangerous #turn #insofar #thinks #some #superfluousquot #saidin #quotcertain #domains #pattern #matching #scale #computers #quite #good #thatbut #superhuman #poem #notion #doing #science #hypequot #added #quotand #talk #about #airplanes #flyers #rulers #measurers #seems #only #space #comes #upquotthe #quotsuper #intelligencequot #general #intelligencemany #ceos #struggle #what #exactly #agi #essentially #ai039s #form #potentially #making #decisions #handling #tasksthere039s #still #evidence #anywhere #near #future #enabled #popularbuzzwordmany #futurelooking #statements #borrow #tropes #fictionboth #boosters #doomers #those #potential #harm #rely #scifi #scenariosthe #aipowered #futuristic #societythe #bemoan #where #robots #over #world #wipe #humanitythe #connecting #thread #unshakable #smarter #inevitablequotone #things #lot #discourse #fixed #question #fast #get #therequot #then #claim #particular #step #path #marketingit #helpful #able #itquotpart #popular #autonomous #functional #assistant #mean #fulfilling #promises #worldchanging #innovation #investorsplanning #utopia #dystopia #keeps #investors #forward #burn #admit #they039ll #carbon #emission #goalsfor #better #worse #fictionwhenever #someone #claiming #product #straight #movie #sign #approach #skepticism #goes #outputs #evaluatedone #easiest #ways #marketing #fluff #look #disclosing #operatesmany #won039t #tell #content #train #modelsbut #usually #disclose #does #data #sometimes #brag #stack #against #competitorsthat039s #start #typically #privacy #policiesone #top #complaints #concernsfrom #creators #trainedthere #many #lawsuits #alleged #copyright #infringement #concerns #bias #capacity #harmquotif #wanted #system #designed #move #rather #reproduce #oppressions #past #curating #dataquot #saidinstead #grabbing #quoteverything #wasn039t #nailed #internetquot #saidif #you039re #hearing #thing #statistic #highlights #its #effectivenesslike #other #researchers #called #finding #citation #red #flagquotanytime #selling #access #evaluated #thin #icequot #saidit #frustrating #disappointing #certain #information #were #developedbut #recognizing #holes #sales #pitch #deflate #though #informationfor #check #fullchatgpt #glossary #offapple
• How to Spot AI Hype and Avoid The AI Con, According to Two Experts
    "Artificial intelligence, if we're being frank, is a con: a bill of goods you are being sold to line someone's pockets."That is the heart of the argument that linguist Emily Bender and sociologist Alex Hanna make in their new book The AI Con. It's a useful guide for anyone whose life has intersected with technologies sold as artificial intelligence and anyone who's questioned their real usefulness, which is most of us. Bender is a professor at the University of Washington who was named one of Time magazine's most influential people in artificial intelligence, and Hanna is the director of research at the nonprofit Distributed AI Research Institute and a former member of the ethical AI team at Google.The explosion of ChatGPT in late 2022 kicked off a new hype cycle in AI. Hype, as the authors define it, is the "aggrandizement" of technology that you are convinced you need to buy or invest in "lest you miss out on entertainment or pleasure, monetary reward, return on investment, or market share." But it's not the first time, nor likely the last, that scholars, government leaders and regular people have been intrigued and worried by the idea of machine learning and AI.Bender and Hanna trace the roots of machine learning back to the 1950s, to when mathematician John McCarthy coined the term artificial intelligence. It was in an era when the United States was looking to fund projects that would help the country gain any kind of edge on the Soviets militarily, ideologically and technologically. "It didn't spring whole cloth out of Zeus's head or anything. This has a longer history," Hanna said in an interview with CNET. "It's certainly not the first hype cycle with, quote, unquote, AI."Today's hype cycle is propelled by the billions of dollars of venture capital investment into startups like OpenAI and the tech giants like Meta, Google and Microsoft pouring billions of dollars into AI research and development. The result is clear, with all the newest phones, laptops and software updates drenched in AI-washing. And there are no signs that AI research and development will slow down, thanks in part to a growing motivation to beat China in AI development. Not the first hype cycle indeed.Of course, generative AI in 2025 is much more advanced than the Eliza psychotherapy chatbot that first enraptured scientists in the 1970s. Today's business leaders and workers are inundated with hype, with a heavy dose of FOMO and seemingly complex but often misused jargon. Listening to tech leaders and AI enthusiasts, it might seem like AI will take your job to save your company money. But the authors argue that neither is wholly likely, which is one reason why it's important to recognize and break through the hype.So how do we recognize AI hype? These are a few telltale signs, according to Bender and Hanna, that we share below. The authors outline more questions to ask and strategies for AI hype busting in their book, which is out now in the US.Watch out for language that humanizes AIAnthropomorphizing, or the process of giving an inanimate object human-like characteristics or qualities, is a big part of building AI hype. An example of this kind of language can be found when AI companies say their chatbots can now "see" and "think."These can be useful comparisons when trying to describe the ability of new object-identifying AI programs or deep-reasoning AI models, but they can also be misleading. AI chatbots aren't capable of seeing of thinking because they don't have brains. 
Even the idea of neural nets, Hanna noted in our interview and in the book, is based on human understanding of neurons from the 1950s, not actually how neurons work, but it can fool us into believing there's a brain behind the machine.That belief is something we're predisposed to because of how we as humans process language. We're conditioned to imagine that there is a mind behind the text we see, even when we know it's generated by AI, Bender said. "We interpret language by developing a model in our minds of who the speaker was," Bender added.In these models, we use our knowledge of the person speaking to create meaning, not just using the meaning of the words they say. "So when we encounter synthetic text extruded from something like ChatGPT, we're going to do the same thing," Bender said. "And it is very hard to remind ourselves that the mind isn't there. It's just a construct that we have produced."The authors argue that part of why AI companies try to convince us their products are human-like is that this sets the foreground for them to convince us that AI can replace humans, whether it's at work or as creators. It's compelling for us to believe that AI could be the silver bullet fix to complicated problems in critical industries like health care and government services.But more often than not, the authors argue, AI isn't bring used to fix anything. AI is sold with the goal of efficiency, but AI services end up replacing qualified workers with black box machines that need copious amounts of babysitting from underpaid contract or gig workers. As Hanna put it in our interview, "AI is not going to take your job, but it will make your job shittier."Be dubious of the phrase 'super intelligence'If a human can't do something, you should be wary of claims that an AI can do it. "Superhuman intelligence, or super intelligence, is a very dangerous turn of phrase, insofar as it thinks that some technology is going to make humans superfluous," Hanna said. In "certain domains, like pattern matching at scale, computers are quite good at that. But if there's an idea that there's going to be a superhuman poem, or a superhuman notion of research or doing science, that is clear hype." Bender added, "And we don't talk about airplanes as superhuman flyers or rulers as superhuman measurers, it seems to be only in this AI space that that comes up."The idea of AI "super intelligence" comes up often when people talk about artificial general intelligence. Many CEOs struggle to define what exactly AGI is, but it's essentially AI's most advanced form, potentially capable of making decisions and handling complex tasks. There's still no evidence we're anywhere near a future enabled by AGI, but it's a popular buzzword.Many of these future-looking statements from AI leaders borrow tropes from science fiction. Both boosters and doomers — how Bender and Hanna describe AI enthusiasts and those worried about the potential for harm — rely on sci-fi scenarios. The boosters imagine an AI-powered futuristic society. The doomers bemoan a future where AI robots take over the world and wipe out humanity.The connecting thread, according to the authors, is an unshakable belief that AI is smarter than humans and inevitable. "One of the things that we see a lot in the discourse is this idea that the future is fixed, and it's just a question of how fast we get there," Bender said. "And then there's this claim that this particular technology is a step on that path, and it's all marketing. 
It is helpful to be able to see behind it."Part of why AI is so popular is that an autonomous functional AI assistant would mean AI companies are fulfilling their promises of world-changing innovation to their investors. Planning for that future — whether it's a utopia or dystopia — keeps investors looking forward as the companies burn through billions of dollars and admit they'll miss their carbon emission goals. For better or worse, life is not science fiction. Whenever you see someone claiming their AI product is straight out of a movie, it's a good sign to approach with skepticism. Ask what goes in and how outputs are evaluatedOne of the easiest ways to see through AI marketing fluff is to look and see whether the company is disclosing how it operates. Many AI companies won't tell you what content is used to train their models. But they usually disclose what the company does with your data and sometimes brag about how their models stack up against competitors. That's where you should start looking, typically in their privacy policies.One of the top complaints and concerns from creators is how AI models are trained. There are many lawsuits over alleged copyright infringement, and there are a lot of concerns over bias in AI chatbots and their capacity for harm. "If you wanted to create a system that is designed to move things forward rather than reproduce the oppressions of the past, you would have to start by curating your data," Bender said. Instead, AI companies are grabbing "everything that wasn't nailed down on the internet," Hanna said.If you're hearing about an AI product for the first time, one thing in particular to look out for is any kind of statistic that highlights its effectiveness. Like many other researchers, Bender and Hanna have called out that a finding with no citation is a red flag. "Anytime someone is selling you something but not giving you access to how it was evaluated, you are on thin ice," Bender said.It can be frustrating and disappointing when AI companies don't disclose certain information about how their AI products work and how they were developed. But recognizing those holes in their sales pitch can help deflate hype, even though it would be better to have the information. For more, check out our full ChatGPT glossary and how to turn off Apple Intelligence.