• The recent buzz around the designer's ‘fixed’ Marvel logo is a fascinating reminder of how art can evoke strong feelings! While some fans are enraged, it's important to celebrate the passion behind our favorite designs and characters. Every piece of art sparks a conversation, and that’s what makes our community vibrant and alive!

    Let’s channel that energy into positivity and creativity! Embrace the diversity of opinions and remember, whether we love it or not, every change can lead to new ideas and perspectives. Keep shining and sharing your thoughts!

    #Marvel #ArtDiscussion #Creativity #Positivity #Community
  • Why does the world of animation, particularly at events like the SIGGRAPH Electronic Theater, continue to suffer from mediocrity? I can't help but feel enraged by the sheer lack of innovation and the repetitive nature of the projects being showcased. On April 17th, we’re promised a “free screening” of selected projects that supposedly represent the pinnacle of creativity and diversity in animation. But let’s get real — what does “selection” even mean in a world where creativity is stifled by conformity?

    Look, I understand that this is a global showcase, but when you sift through the projects that make it through the selection process, what do we find? Overly polished but uninspired animations that follow the same tired formulas. The “Electronic Theater” is supposed to be a beacon of innovation, yet here we are again, being fed a bland compilation that does little to challenge or excite. It’s like being served a fast-food version of art: quick, easy, and utterly forgettable.

    The call for diversity is also a double-edged sword. Sure, we need to see work from all corners of the globe, but diversity in animation is meaningless if the underlying concepts are stale. It’s not enough to tick boxes and say, “Look how diverse we are!” when the actual content fails to push boundaries. Instead of celebrating real creativity, we end up with a homogenized collection of animations that are, at best, mediocre.

    And let’s talk about the timing of this event. April 17th? Are we really thinking this through? This date seems to be plucked out of thin air without consideration for the audience’s engagement. Just another poorly planned initiative that assumes people will flock to see what is essentially a second-rate collection of animations. Is this really the best you can do, Montpellier ACM SIGGRAPH? Where is the excitement? Where is the passion?

    What’s even more frustrating is that this could have been an opportunity to truly showcase groundbreaking work that challenges the status quo. Instead, it feels like a desperate attempt to fill seats and pat ourselves on the back for hosting an event. Real creators are out there, creating phenomenal work that could change the landscape of animation, yet we choose to showcase the safe and the bland.

    It’s time to demand more from events like SIGGRAPH. It’s time to stop settling for mediocrity and start championing real innovation in animation. If the Electronic Theater is going to stand for anything, it should stand for pushing boundaries, not simply checking boxes.

    Let’s not allow ourselves to be content with what we’re served. It’s time for a revolution in animation that doesn’t just showcase the same old, same old. We deserve better, and the art community deserves better.

    #AnimationRevolution
    #SIGGRAPH2024
    #CreativityMatters
    #DiversityInAnimation
    #ChallengeTheNorm
    Free screening: SIGGRAPH's Electronic Theater, April 17th!
    Weren't at SIGGRAPH last summer? Montpellier ACM SIGGRAPH has thought of you and is organizing a free screening this Thursday, April 17th, of the projects selected for the Electronic Theater 2024, the animation festival of SI
  • The 2-year hunt for ‘one of the rarest games in history’

    Cosmology of Kyoto is a first-person horror exploration game where players navigate a deeply haunted yet surprisingly educational terrain. Originally released in 1993, Cosmology of Kyoto and its disturbing depictions of suffering have since become a cult classic. Roger Ebert, known hater, loved the game so much that he spent weeks playing it. Despite its acclaim, though, the game was a commercial failure and never got a sequel. At least, that’s what many people believed until now.

    In 2023, a game called TRIPITAKA 玄奘三蔵求法の旅 was listed on Yahoo Japan. The game was sold for $300 to an unknown party who, despite winning a bidding war that climbed into the hundreds of dollars, didn’t share anything publicly about it. The transaction was originally noticed by Mark Buckner, who brought it up in a discussion among fans of the original eerie Japanese game.

    Though diehard aficionados had a suspicion that the Cosmology developers had considered a follow-up, concrete evidence of it was scant. The only apparent mention of a sequel lay in the résumés of two Cosmology producers, Hiroshi Ōnishi and Mori Kōichi. Fans also spotted mention of it on an old website for a 1999 museum exhibition on the Silk Road. Though it was a work of fiction, Cosmology was rooted in the history of 10th century Japan and provided players with an in-game encyclopedia. It would make sense for a potential sequel to have an educational focus worthy of a museum exhibition.

    Despite these rumblings, it was unclear if the game had ever been published, or how far into production it got. Knowledge of the auction prompted video game academic Bruno de Figueiredo to track down the auction winner. The hope was that whoever bought it might share a copy of the game online. After all, up until this point, few knew what this game was and its mere existence lay in doubt. But if it did exist, then it was obviously significant from a historical perspective. Fans would be eager to play it.

    But getting collectors to share copies of rare games is tricky. If a game is widely accessible, then it’s no longer rare. Holding on to a copy ensures that it retains its aura as a prized possession. Hoarding also means that the value of a game won’t drop — in fact, it might rise. Not all collectors see their possessions as commodities, though. Holding on to a culturally significant game might be motivated by the desire to preserve it for future generations, which is relevant in instances where a copy of a game is still sealed. Uploading a game that you did not develop is also likely to be legally dubious.

    In this case, the owner declined to share the game in a form that others could play. The collector did, however, upload an hour’s worth of footage to YouTube. The game was called TRIPITAKA, and though it did not outright classify itself as a sequel, the art style, historical focus, and slightly unnerving vibe placed TRIPITAKA in a similar realm as Cosmology of Kyoto. Fans considered it a spiritual successor. Cosmology itself had been developed with the help of Japanese museums.

    For some, it was enough to get more of a game they loved. Even if they couldn’t personally control the gameplay, the TRIPITAKA video was lengthy enough to give a sense of what the experience would be like. Others were enraged: Couldn’t the collector see how important this game was?

    “I cannot understate just how disgusted I am that this piece of culture and art (that I am a huge fan of) isn’t being preserved and spread for the enjoyment of others,” one commenter on YouTube wrote. “Shame on you.”

    Undeterred by this roadblock, Bruno de Figueiredo continued his pursuit of TRIPITAKA. In 2025, his efforts bore fruit. On X, the expert on obscure Japanese games revealed that he had finally convinced the collector to share the game online after “years of appeals.” Figueiredo has since uploaded a playable ISO of the game alongside a full three-hour playthrough of what had once been considered lost media.

    Figueiredo did not respond to a request for comment. In a blog post, he emphasized the significance of this find by stating that “the importance of this footage could hardly be overstated.”

    He continued:

    I am delighted to have played a minor role in the unraveling of this thirty year old mystery, and can hardly contain my enthusiasm, as I now find myself equipped with sufficient information to produce a full post concerning a game about which I could not have written more than a sentence, just last year.

    Figueiredo refers to TRIPITAKA as one of the rarest games ever made, and it’s true inasmuch as there appears to be only one known copy of it. Value and rarity are also fluid concepts that are ultimately determined by interested audiences. At the same time, TRIPITAKA’s fate and availability are shockingly ordinary when you consider how poorly the gaming industry preserves its own history. If the lack of care is evident with significant games that have arguable merit, it’s doubly true for average games. This is how a game with mixed reviews from twenty years ago suddenly starts commanding hundreds of dollars on resale sites; the scarcity happens because nobody felt a game was worth holding on to.

    “There are many extremely rare (and even lost) games for personal computers which, unlike consoles, don’t have any central control over who can publish a game, or what the minimum number of manufactured units needs to be,” says Frank Cifaldi, founder of the Video Game History Foundation, a nonprofit dedicated to preserving video games. Cifaldi notes that games of the ’80s and ’90s in particular, some of which were self-published and never got widespread circulation to begin with, are particularly prone to the type of obscurity that can lead to only a single copy of a game surviving.

    “I would further suspect that there were many games and multimedia objects from Japan during this era that are just as rare, but we don’t hear about them because of their lack of historical significance in the West,” Cifaldi says. “I would bet good money that if you surveyed the collection at the Game Preservation Society in Japan, you’d come up with dozens of ‘only known copies’ of 1980s microcomputer games.”
    WWW.POLYGON.COM
  • Trump was supposed to lead a global right-wing populist revolution. That’s not happening.

    Is President Donald Trump leading a vanguard of right-wing populist world leaders, working together to lay waste to the liberal international order while consolidating power at home? Possibly — but based on his recent foreign policy actions, he doesn’t appear to think so.

    Establishment-bashing politicians around the world, from Brazil’s Jair Bolsonaro to the Philippines’ Rodrigo Duterte to the UK’s Boris Johnson, have drawn comparisons to Trump over the years. Some, notably Hungary’s Viktor Orbán and Argentina’s Javier Milei, have cultivated ties to the Trump-era American right, becoming fixtures at the Conservative Political Action Conference and making the rounds on US talk shows and podcasts. In Romania’s recent presidential election, the leading right-wing candidate somewhat confusingly described himself as being on the “MAGA ticket.” Trump himself has occasionally weighed in on other countries’ political debates to endorse right-wing politicians like France’s embattled far-right leader Marine Le Pen.

    Some of Trump’s senior officials have spoken openly of wanting to build ties with the global right. In his combative speech at the Munich Security Conference earlier this year, Vice President JD Vance described what he sees as the unfair marginalization of right-wing parties in countries like Romania and Germany as a greater threat to Europe’s security than China or Russia. Trump ally Elon Musk has been even more active in boosting far-right parties in elections around the world. But just because Trump and his officials like to see politicians and parties in their own mold win, that doesn’t mean countries led by those politicians and parties can count on any special treatment from the Trump administration.
    This has been especially clear in recent weeks. Just ask Israel’s Prime Minister Benjamin Netanyahu, who has spent years cultivating close ties with the US Republican Party, and with Trump in particular, and has followed a somewhat similar path in bringing previously marginalized far-right partners into the mainstream. All that has been of little use as Trump has left his Israeli supporters aghast by carrying out direct negotiations with the likes of Hamas, the Houthis, and Iran and being feted by Gulf monarchs on a Middle East tour that pointedly did not include Israel.

    India’s Hindu nationalist prime minister, Narendra Modi, has likewise been compared to Trump in his populist appeal, majoritarian rhetoric, and dismantling of democratic norms. Trump has cultivated a massive coterie of fans among Hindu nationalist Modi supporters as well as a close working relationship with Modi himself. But after Trump announced a ceasefire agreement in the recent flare-up of violence between India and Pakistan, he enraged many of his Indian supporters with remarks that appeared to take credit for pressuring India to halt its military campaign and drew equivalence between the Indian and Pakistani positions. Adding insult to injury, Trump publicly criticized Apple for plans to move the assembly of American iPhones from China to India, a move that in other administrations might have been praised as a victory for “friendshoring” — moving the production of critical goods from adversaries to allies — but doesn’t advance Trump’s goal of returning industrial manufacturing to the US.

    Even Orbán, star of CPAC and favorite guest of Tucker Carlson, has appeared frustrated with Trump as of late. His government has described its close economic relationship with China as a “red line,” vowing not to decouple its economy from Beijing’s, no matter what pressure Trump applies. Orbán’s simultaneous position as the most pro-Trump and most pro-China leader in Europe is looking increasingly awkward.
    Overall, there’s simply little evidence that political affinity guides Trump’s approach to foreign policy, a fact made abundantly clear by the “Liberation Day” tariffs the president announced in April. Taking just Latin America, for example, Argentina — led by the floppy-haired iconoclast and Musk favorite Javier Milei — and El Salvador — led by Nayib Bukele, a crypto-loving authoritarian willing to turn his country’s prisons into an American gulag — might have expected exemptions from the tariffs. But they were hit with the same tariff rates as leftist-led governments like Colombia and Brazil.

    Ultimately, it’s not the leaders who see eye to eye with Trump on migration, the rule of law, or wokeness who seem to have his ear. It’s the big-money monarchs of the Middle East, who can deliver the big deals and quick wins he craves. And based on the probably-at-least-partly Trump-inspired drubbing inflicted on right-wing parties in Canada and Australia in recent elections, it’s not clear that being known as the “Trump of” your country really gets you all that much. Whatever his ultimate legacy for the United States and the world, he doesn’t seem likely to be remembered as the man who made global far-right populism great again, and he doesn’t really seem all that concerned about that.
    #trump #was #supposed #lead #global
    Trump was supposed to lead a global right-wing populist revolution. That’s not happening.
    Is President Donald Trump leading a vanguard of right-wing populist world leaders, working together to lay waste to the liberal international order while consolidating power at home? Possibly — but based on his recent foreign policy actions, he doesn’t appear to think so. Establishment-bashing politicians around the world, from Brazil’s Jair Bolsonaro to the Philippines’ Rodrigo Duterte to the UK’s Boris Johnson, have drawn comparisons to Trump over the years. Some, notably Hungary’s Viktor Orbán and Argentina’s Javier Milei, have cultivated ties to the Trump-era American right, becoming fixtures at the Conservative Political Action Conference (CPAC) and making the rounds on US talk shows and podcasts. In Romania’s recent presidential election, the leading right-wing candidate somewhat confusingly described himself as being on the “MAGA ticket.” Trump himself has occasionally weighed in on other countries’ political debates to endorse right-wing politicians like France’s embattled far-right leader Marine Le Pen.

    Some of Trump’s senior officials have spoken openly of wanting to build ties with the global right. In his combative speech at the Munich Security Conference earlier this year, Vice President JD Vance described what he sees as the unfair marginalization of right-wing parties in countries like Romania and Germany as a greater threat to Europe’s security than China or Russia. Trump ally Elon Musk has been even more active in boosting far-right parties in elections around the world. But just because Trump and his officials like to see politicians and parties in their own mold win, that doesn’t mean countries led by those politicians and parties can count on any special treatment from the Trump administration.

    This has been especially clear in recent weeks. Just ask Israel’s Prime Minister Benjamin Netanyahu, who has spent years cultivating close ties with the US Republican Party, and with Trump in particular, and has followed a somewhat similar path in bringing previously marginalized far-right partners into the mainstream. All that has been of little use as Trump has left his Israeli supporters aghast by carrying out direct negotiations with the likes of Hamas, the Houthis, and Iran and being feted by Gulf monarchs on a Middle East tour that pointedly did not include Israel.

    India’s Hindu nationalist prime minister, Narendra Modi, has likewise been compared to Trump in his populist appeal, majoritarian rhetoric, and dismantling of democratic norms. Trump has cultivated a massive coterie of fans among Hindu nationalist Modi supporters as well as a close working relationship with Modi himself. But after Trump announced a ceasefire agreement in the recent flare-up of violence between India and Pakistan, he enraged many of his Indian supporters with remarks that appeared to take credit for pressuring India to halt its military campaign and drew equivalence between the Indian and Pakistani positions. Adding insult to injury, Trump publicly criticized Apple for plans to move the assembly of American iPhones from China to India, a move that in other administrations might have been praised as a victory for “friendshoring” — moving the production of critical goods from adversaries to allies — but doesn’t advance Trump’s goal of returning industrial manufacturing to the US.

    Even Orbán, star of CPAC and favorite guest of Tucker Carlson, has appeared frustrated with Trump of late. His government has described its close economic relationship with China as a “red line,” vowing not to decouple its economy from Beijing’s, no matter what pressure Trump applies. Orbán’s simultaneous position as the most pro-Trump and most pro-China leader in Europe is looking increasingly awkward.

    Overall, there’s simply little evidence that political affinity guides Trump’s approach to foreign policy, a fact made abundantly clear by the “Liberation Day” tariffs the president announced in April. Taking just Latin America, for example, Argentina — led by the floppy-haired iconoclast and Musk favorite Javier Milei — and El Salvador — led by Nayib Bukele, a crypto-loving authoritarian willing to turn his country’s prisons into an American gulag — might have expected exemptions from the tariffs. But they were hit with the same tariff rates as leftist-led governments like Colombia and Brazil. Ultimately, it’s not the leaders who see eye to eye with Trump on migration, the rule of law, or wokeness who seem to have his ear. It’s the big-money monarchs of the Middle East, who can deliver the big deals and quick wins he craves.

    And based on the probably-at-least-partly Trump-inspired drubbing inflicted on right-wing parties in Canada and Australia in recent elections, it’s not clear that being known as the “Trump of” your country really gets you all that much. Whatever his ultimate legacy for the United States and the world, Trump doesn’t seem likely to be remembered as the man who made global far-right populism great again, and he doesn’t really seem all that concerned about that.
    WWW.VOX.COM
  • How To Take Down The Powerful Mizutsune In Monster Hunter Wilds

    By Samuel Moreno | Published 5 minutes ago

    Monster Hunter Wilds’ first title update introduced a lot of new content, but a standout addition is Mizutsune. This serpentine behemoth is a one-of-a-kind threat packed with unique mechanics and an aggressive attitude. The Tempered version takes it to another level and is considered by many to be stronger than anything encountered in the campaign or after. No matter which variant you’re having trouble with, we can help you beat this frothy threat.

    You’ll first take on a Mizutsune in the “Spirit in the Moonlight” side mission. The only requirements are being at Hunter Rank 21 and having already completed the “Fishing: Life, In Microcosm” side mission for Kanya. Once you’ve hunted the monster down one time, it will start spawning in both the Scarlet Forest and the Ruins of Wyveria. Tempered Mizutsune can spawn in these same areas, albeit only once you’ve leveled up more and reached at least chapter six of the main quest line. I can say from personal experience that the Tempered version spawned more often during a Fallow season or an Inclemency. Feel free to use the Rest function a couple of times to make it show up.

    Mizutsune’s most distinctive aspect is its bubbles, which are dispersed during many of its attacks. Getting hit by most of these will inflict you with the unique Bubbleblight ailment, which has a minor and a major stage. Minor Bubbleblight isn’t too bad, as it’s a buff that enhances your evasion. Getting hit with another bubble upgrades it to the more frustrating Major Bubbleblight, which causes you to slip while running and be sent farther away by large attacks or explosions.

    A Nulberry unfortunately can’t cure this status ailment, although they’re still worth bringing since Mizutsune can additionally inflict both Waterblight and Fireblight. The quickest way to cure Bubbleblight is to use a Cleanser. Alternatively, if you have fought this monster before and are just farming, equipping Mizutsune-forged armor is a big help: its Bubbly Dance skill prevents Major Bubbleblight so that you can focus more on dealing damage. If you don’t have the armor or run out of Cleansers, your next best bet is waiting 30 seconds for the ailment to disappear.

    Some bubbles apply different effects, which makes things more complicated. Thankfully, they’re color-coded; however, you’ll still need to be quick on your feet when they’re coming right at you. Here are the different-colored bubbles and what they do:

    Clear: Deals damage and inflicts Bubbleblight
    Green: Provides healing and inflicts Bubbleblight
    Red: Provides a temporary attack boost and inflicts Bubbleblight
    Fiery Blue: Deals damage and inflicts Fireblight

    Trying to dodge these bubbles isn’t always the best use of your time. Both Slinger ammo and attacks from your weapon can pop them, although slower melee weapons can require precise timing. Ranged weapons like the Heavy or Light Bowgun are a lot more convenient.

    Mizutsune is a highly mobile monster that hits hard and uses its water-based attacks to trigger a variety of status ailments. Start your hunts with the following weaknesses in mind, because you’ll want to take advantage of them:

    Elemental Weaknesses: Thunder, Dragon
    Weapon Type Weaknesses: Cut, Blunt
    Breakable Parts: Head x2, Claws x2, Tail (can also be severed), Dorsal Fin
    Weak Point: Mouth
    Susceptible Status Ailments: Blastblight, Exhaust, Paralysis, Poison, Sleep, Stun

    Even though you can tackle this thing as early as Hunter Rank 21, I suggest holding off for a bit. Mizutsune dishes out massive damage that can be mitigated with better armor and weapons. Wait until you’ve finished chapter five and can farm Gore Magala hunts: weapons crafted from Gore Magala parts deal Dragon element damage, while its armor offers great Water resistance. These are huge advantages to have when fighting this creature.

    Any weapon type can work, but slower ones will feel extra cumbersome against Mizutsune. Between the erratic movements and seemingly endless bubbles, it’s convenient to have quicker weapons like Dual Blades or Sword and Shield; their multi-hitting nature will help with applying elemental damage as well. If you’re still having trouble landing hits, I advise pinning it down with traps and status ailments.

    You should also watch out for its Waterblight-inflicting jet stream attacks. Mizutsune has a handful of moves that involve vertical or horizontal sweeping water beams. Thankfully, they’re easily telegraphed and have small hitboxes. Make sure to exploit Mizutsune’s long recovery periods after these attacks.

    Mizutsune’s long-reaching tail attacks are another notable characteristic. The deadliest of these are the tail slams, which come out fast and can take out all of your health if your defense is low. There is a backflip variant to hit anyone behind it, and another where it twists its body in the air to slam those in front. I’ve seen the latter countered with an Offset Attack, but dodging is the less risky solution.

    All of the above is amplified when the monster enters its unique enraged state. Breaking its head enables it to transition into a powered-up mode akin to Soulseer Mizutsune from prior entries, complete with blue fire flaring from its left eye. When this state is triggered, Mizutsune starts shooting fire-covered bubbles in addition to attacking more rapidly and aggressively. While you can try to avoid this by not breaking the head, the trade-off is that you’ll be inflicting less damage. The one positive to this enraged state is that the monster tires quicker and eventually becomes exhausted. That’s your cue to start dealing as much damage as possible.

    Don’t feel bad if these hunts leave you frustrated; there is a lot to keep track of and little time to react. Tempered Mizutsune is even more challenging and might just be the toughest fight in Monster Hunter Wilds yet. Still, everything we’ve mentioned applies all the same. Memorizing which animations initiate which attacks will go a long way. Otherwise, bring your best gear, outfit it with appropriate decorations, and carry the Armorcharm and Powercharm items for good measure.

    Nothing makes a tough fight feel more worth it than some good loot. Flipping through the monster’s Detailed Info tabs will break down the various drop rates for the end of a hunt, destroyed wounds, and body-part carvings. It’s a lot to take in, but worth perusing to narrow down what parts you need. Below is a simple list of the attainable Mizutsune materials, sorted by overall drop frequency with the most common parts at the top:

    Mizutsune Fin+ (100% chance for breaking the Head or Dorsal Fin)
    Mizutsune Claw+ (100% chance for breaking either Claw)
    Mizutsune Purplefur+ (100% chance for breaking the Tail)
    Mizutsune Tail
    Mizutsune Scale+
    Bubblefoam+
    Mizutsune Certificate S
    Mizutsune Water Orb

    The community’s pleas for harder hunts certainly seem to have been heard. With the addition of Arch-Tempered monsters on the horizon, I can imagine these are only going to get tougher.
    KOTAKU.COM
  • Inside the story that enraged OpenAI

    In 2019, Karen Hao, a senior reporter with MIT Technology Review, pitched me on writing a story about a then little-known company, OpenAI. It was her biggest assignment to date. Hao’s feat of reporting took a series of twists and turns over the coming months, eventually revealing how OpenAI’s ambition had taken it far afield from its original mission. The finished story was a prescient look at a company at a tipping point—or already past it. And OpenAI was not happy with the result. Hao’s new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, is an in-depth exploration of the company that kick-started the AI arms race, and what that race means for all of us. This excerpt is the origin story of that reporting. — Niall Firth, executive editor, MIT Technology Review

    I arrived at OpenAI’s offices on August 7, 2019. Greg Brockman, then thirty‑one, OpenAI’s chief technology officer and soon‑to‑be company president, came down the staircase to greet me. He shook my hand with a tentative smile. “We’ve never given someone so much access before,” he said.

    At the time, few people beyond the insular world of AI research knew about OpenAI. But as a reporter at MIT Technology Review covering the ever‑expanding boundaries of artificial intelligence, I had been following its movements closely.

    Until that year, OpenAI had been something of a stepchild in AI research. It had an outlandish premise that AGI could be attained within a decade, when most non‑OpenAI experts doubted it could be attained at all. To much of the field, it had an obscene amount of funding despite little direction and spent too much of the money on marketing what other researchers frequently snubbed as unoriginal research. It was, for some, also an object of envy. As a nonprofit, it had said that it had no intention to chase commercialization. It was a rare intellectual playground without strings attached, a haven for fringe ideas.

    But in the six months leading up to my visit, the rapid slew of changes at OpenAI signaled a major shift in its trajectory. First was its confusing decision to withhold GPT‑2 and brag about it. Then its announcement that Sam Altman, who had mysteriously departed his influential perch at YC, would step in as OpenAI’s CEO with the creation of its new “capped‑profit” structure. I had already made my arrangements to visit the office when it subsequently revealed its deal with Microsoft, which gave the tech giant priority for commercializing OpenAI’s technologies and locked it into exclusively using Azure, Microsoft’s cloud‑computing platform.

    Each new announcement garnered fresh controversy, intense speculation, and growing attention, beginning to reach beyond the confines of the tech industry. As my colleagues and I covered the company’s progression, it was hard to grasp the full weight of what was happening. What was clear was that OpenAI was beginning to exert meaningful sway over AI research and the way policymakers were learning to understand the technology. The lab’s decision to revamp itself into a partially for‑profit business would have ripple effects across its spheres of influence in industry and government. 

    So late one night, with the urging of my editor, I dashed off an email to Jack Clark, OpenAI’s policy director, whom I had spoken with before: I would be in town for two weeks, and it felt like the right moment in OpenAI’s history. Could I interest them in a profile? Clark passed me on to the communications head, who came back with an answer. OpenAI was indeed ready to reintroduce itself to the public. I would have three days to interview leadership and embed inside the company.

    Brockman and I settled into a glass meeting room with the company’s chief scientist, Ilya Sutskever. Sitting side by side at a long conference table, they each played their part. Brockman, the coder and doer, leaned forward, a little on edge, ready to make a good impression; Sutskever, the researcher and philosopher, settled back into his chair, relaxed and aloof.

    I opened my laptop and scrolled through my questions. OpenAI’s mission is to ensure beneficial AGI, I began. Why spend billions of dollars on this problem and not something else?

    Brockman nodded vigorously. He was used to defending OpenAI’s position. “The reason that we care so much about AGI and that we think it’s important to build is because we think it can help solve complex problems that are just out of reach of humans,” he said.

    He offered two examples that had become dogma among AGI believers. Climate change. “It’s a super‑complex problem. How are you even supposed to solve it?” And medicine. “Look at how important health care is in the US as a political issue these days. How do we actually get better treatment for people at lower cost?”

    On the latter, he began to recount the story of a friend who had a rare disorder and had recently gone through the exhausting rigmarole of bouncing between different specialists to figure out his problem. AGI would bring together all of these specialties. People like his friend would no longer spend so much energy and frustration on getting an answer.

    Why did we need AGI to do that instead of AI? I asked.

    This was an important distinction. The term AGI, once relegated to an unpopular section of the technology dictionary, had only recently begun to gain more mainstream usage—in large part because of OpenAI.

    And as OpenAI defined it, AGI referred to a theoretical pinnacle of AI research: a piece of software that had just as much sophistication, agility, and creativity as the human mind to match or exceed its performance on most tasks. The operative word was theoretical. Since the beginning of earnest research into AI several decades earlier, debates had raged about whether silicon chips encoding everything in their binary ones and zeros could ever simulate brains and the other biological processes that give rise to what we consider intelligence. There had yet to be definitive evidence that this was possible, which didn’t even touch on the normative discussion of whether people should develop it.

    AI, on the other hand, was the term du jour for both the version of the technology currently available and the version that researchers could reasonably attain in the near future through refining existing capabilities. Those capabilities—rooted in powerful pattern matching known as machine learning—had already demonstrated exciting applications in climate change mitigation and health care.

    Sutskever chimed in. When it comes to solving complex global challenges, “fundamentally the bottleneck is that you have a large number of humans and they don’t communicate as fast, they don’t work as fast, they have a lot of incentive problems.” AGI would be different, he said. “Imagine it’s a large computer network of intelligent computers—they’re all doing their medical diagnostics; they all communicate results between them extremely fast.”

    This seemed to me like another way of saying that the goal of AGI was to replace humans. Is that what Sutskever meant? I asked Brockman a few hours later, once it was just the two of us.

    “No,” Brockman replied quickly. “This is one thing that’s really important. What is the purpose of technology? Why is it here? Why do we build it? We’ve been building technologies for thousands of years now, right? We do it because they serve people. AGI is not going to be different—not the way that we envision it, not the way we want to build it, not the way we think it should play out.”

    That said, he acknowledged a few minutes later, technology had always destroyed some jobs and created others. OpenAI’s challenge would be to build AGI that gave everyone “economic freedom” while allowing them to continue to “live meaningful lives” in that new reality. If it succeeded, it would decouple the need to work from survival.

    “I actually think that’s a very beautiful thing,” he said.

    In our meeting with Sutskever, Brockman reminded me of the bigger picture. “What we view our role as is not actually being a determiner of whether AGI gets built,” he said. This was a favorite argument in Silicon Valley—the inevitability card. If we don’t do it, somebody else will. “The trajectory is already there,” he emphasized, “but the thing we can influence is the initial conditions under which it’s born.

    “What is OpenAI?” he continued. “What is our purpose? What are we really trying to do? Our mission is to ensure that AGI benefits all of humanity. And the way we want to do that is: Build AGI and distribute its economic benefits.”

    His tone was matter‑of‑fact and final, as if he’d put my questions to rest. And yet we had somehow just arrived back exactly where we’d started.

    Our conversation continued on in circles until we ran out the clock after forty‑five minutes. I tried with little success to get more concrete details on what exactly they were trying to build—which by nature, they explained, they couldn’t know—and why, then, if they couldn’t know, they were so confident it would be beneficial. At one point, I tried a different approach, asking them instead to give examples of the downsides of the technology. This was a pillar of OpenAI’s founding mythology: The lab had to build good AGI before someone else built a bad one.

    Brockman attempted an answer: deepfakes. “It’s not clear the world is better through its applications,” he said.

    I offered my own example: Speaking of climate change, what about the environmental impact of AI itself? A recent study from the University of Massachusetts Amherst had placed alarming numbers on the huge and growing carbon emissions of training larger and larger AI models.

    That was “undeniable,” Sutskever said, but the payoff was worth it because AGI would, “among other things, counteract the environmental cost specifically.” He stopped short of offering examples.

    “It is unquestioningly very highly desirable that data centers be as green as possible,” he added.

    “No question,” Brockman quipped.

    “Data centers are the biggest consumer of energy, of electricity,” Sutskever continued, seeming intent now on proving that he was aware of and cared about this issue.

    “It’s 2 percent globally,” I offered.

    “Isn’t Bitcoin like 1 percent?” Brockman said.

    “Wow!” Sutskever said, in a sudden burst of emotion that felt, at this point, forty minutes into the conversation, somewhat performative.

    Sutskever would later sit down with New York Times reporter Cade Metz for his book Genius Makers, which recounts a narrative history of AI development, and say without a hint of satire, “I think that it’s fairly likely that it will not take too long of a time for the entire surface of the Earth to become covered with data centers and power stations.” There would be “a tsunami of computing . . . almost like a natural phenomenon.” AGI—and thus the data centers needed to support them—would be “too useful to not exist.”

    I tried again to press for more details. “What you’re saying is OpenAI is making a huge gamble that you will successfully reach beneficial AGI to counteract global warming before the act of doing so might exacerbate it.”

    “I wouldn’t go too far down that rabbit hole,” Brockman hastily cut in. “The way we think about it is the following: We’re on a ramp of AI progress. This is bigger than OpenAI, right? It’s the field. And I think society is actually getting benefit from it.”

    “The day we announced the deal,” he said, referring to Microsoft’s new billion investment, “Microsoft’s market cap went up by billion. People believe there is a positive ROI even just on short‑term technology.”

    OpenAI’s strategy was thus quite simple, he explained: to keep up with that progress. “That’s the standard we should really hold ourselves to. We should continue to make that progress. That’s how we know we’re on track.”

    Later that day, Brockman reiterated that the central challenge of working at OpenAI was that no one really knew what AGI would look like. But as researchers and engineers, their task was to keep pushing forward, to unearth the shape of the technology step by step.

    He spoke like Michelangelo, as though AGI already existed within the marble he was carving. All he had to do was chip away until it revealed itself.

    There had been a change of plans. I had been scheduled to eat lunch with employees in the cafeteria, but something now required me to be outside the office. Brockman would be my chaperone. We headed two dozen steps across the street to an open‑air café that had become a favorite haunt for employees.

    This would become a recurring theme throughout my visit: floors I couldn’t see, meetings I couldn’t attend, researchers stealing furtive glances at the communications head every few sentences to check that they hadn’t violated some disclosure policy. I would later learn that after my visit, Jack Clark would issue an unusually stern warning to employees on Slack not to speak with me beyond sanctioned conversations. The security guard would receive a photo of me with instructions to be on the lookout if I appeared unapproved on the premises. It was odd behavior in general, made odder by OpenAI’s commitment to transparency. What, I began to wonder, were they hiding, if everything was supposed to be beneficial research eventually made available to the public?

    At lunch and through the following days, I probed deeper into why Brockman had cofounded OpenAI. He was a teen when he first grew obsessed with the idea that it could be possible to re‑create human intelligence. It was a famous paper from British mathematician Alan Turing that sparked his fascination. The name of its first section, “The Imitation Game,” which inspired the title of the 2014 Hollywood dramatization of Turing’s life, begins with the opening provocation, “Can machines think?” The paper goes on to define what would become known as the Turing test: a measure of the progression of machine intelligence based on whether a machine can talk to a human without giving away that it is a machine. It was a classic origin story among people working in AI. Enchanted, Brockman coded up a Turing test game and put it online, garnering some 1,500 hits. It made him feel amazing. “I just realized that was the kind of thing I wanted to pursue,” he said.

    In 2015, as AI saw great leaps of advancement, Brockman says that he realized it was time to return to his original ambition and joined OpenAI as a cofounder. He wrote down in his notes that he would do anything to bring AGI to fruition, even if it meant being a janitor. When he got married four years later, he held a civil ceremony at OpenAI’s office in front of a custom flower wall emblazoned with the shape of the lab’s hexagonal logo. Sutskever officiated. The robotic hand they used for research stood in the aisle bearing the rings, like a sentinel from a post-apocalyptic future.

    “Fundamentally, I want to work on AGI for the rest of my life,” Brockman told me.

    What motivated him? I asked Brockman.

    What are the chances that a transformative technology could arrive in your lifetime? he countered.

    He was confident that he—and the team he assembled—was uniquely positioned to usher in that transformation. “What I’m really drawn to are problems that will not play out in the same way if I don’t participate,” he said.

    Brockman did not in fact just want to be a janitor. He wanted to lead AGI. And he bristled with the anxious energy of someone who wanted history‑defining recognition. He wanted people to one day tell his story with the same mixture of awe and admiration that he used to recount the ones of the great innovators who came before him.

    A year before we spoke, he had told a group of young tech entrepreneurs at an exclusive retreat in Lake Tahoe with a twinge of self‑pity that chief technology officers were never known. Name a famous CTO, he challenged the crowd. They struggled to do so. He had proved his point.

    In 2022, he became OpenAI’s president.

    During our conversations, Brockman insisted to me that none of OpenAI’s structural changes signaled a shift in its core mission. In fact, the capped profit and the new crop of funders enhanced it. “We managed to get these mission‑aligned investors who are willing to prioritize mission over returns. That’s a crazy thing,” he said.

    OpenAI now had the long‑term resources it needed to scale its models and stay ahead of the competition. This was imperative, Brockman stressed. Failing to do so was the real threat that could undermine OpenAI’s mission. If the lab fell behind, it had no hope of bending the arc of history toward its vision of beneficial AGI. Only later would I realize the full implications of this assertion. It was this fundamental assumption—the need to be first or perish—that set in motion all of OpenAI’s actions and their far‑reaching consequences. It put a ticking clock on each of OpenAI’s research advancements, based not on the timescale of careful deliberation but on the relentless pace required to cross the finish line before anyone else. It justified OpenAI’s consumption of an unfathomable amount of resources: both compute, regardless of its impact on the environment; and data, the amassing of which couldn’t be slowed by getting consent or abiding by regulations.

    Brockman pointed once again to the billion jump in Microsoft’s market cap. “What that really reflects is AI is delivering real value to the real world today,” he said. That value was currently being concentrated in an already wealthy corporation, he acknowledged, which was why OpenAI had the second part of its mission: to redistribute the benefits of AGI to everyone.

    Was there a historical example of a technology’s benefits that had been successfully distributed? I asked.

    “Well, I actually think that—it’s actually interesting to look even at the internet as an example,” he said, fumbling a bit before settling on his answer. “There’s problems, too, right?” he said as a caveat. “Anytime you have something super transformative, it’s not going to be easy to figure out how to maximize positive, minimize negative.

    “Fire is another example,” he added. “It’s also got some real drawbacks to it. So we have to figure out how to keep it under control and have shared standards.

    “Cars are a good example,” he followed. “Lots of people have cars, benefit a lot of people. They have some drawbacks to them as well. They have some externalities that are not necessarily good for the world,” he finished hesitantly.

    “I guess I just view—the thing we want for AGI is not that different from the positive sides of the internet, positive sides of cars, positive sides of fire. The implementation is very different, though, because it’s a very different type of technology.”

    His eyes lit up with a new idea. “Just look at utilities. Power companies, electric companies are very centralized entities that provide low‑cost, high‑quality things that meaningfully improve people’s lives.”

    It was a nice analogy. But Brockman seemed once again unclear about how OpenAI would turn itself into a utility. Perhaps through distributing universal basic income, he wondered aloud, perhaps through something else.

    He returned to the one thing he knew for certain. OpenAI was committed to redistributing AGI’s benefits and giving everyone economic freedom. “We actually really mean that,” he said.

    “The way that we think about it is: Technology so far has been something that does rise all the boats, but it has this real concentrating effect,” he said. “AGI could be more extreme. What if all value gets locked up in one place? That is the trajectory we’re on as a society. And we’ve never seen that extreme of it. I don’t think that’s a good world. That’s not a world that I want to sign up for. That’s not a world that I want to help build.”

    In February 2020, I published my profile for MIT Technology Review, drawing on my observations from my time in the office, nearly three dozen interviews, and a handful of internal documents. “There is a misalignment between what the company publicly espouses and how it operates behind closed doors,” I wrote. “Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration.”

    Hours later, Elon Musk replied to the story with three tweets in rapid succession:

    “OpenAI should be more open imo”

    “I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high,” he said, referring to Dario Amodei, the director of research.

    “All orgs developing advanced AI should be regulated, including Tesla”

    Afterward, Altman sent OpenAI employees an email.

    “I wanted to share some thoughts about the Tech Review article,” he wrote. “While definitely not catastrophic, it was clearly bad.”

    It was “a fair criticism,” he said, that the piece had identified a disconnect between the perception of OpenAI and its reality. This could be smoothed over not with changes to its internal practices but some tuning of OpenAI’s public messaging. “It’s good, not bad, that we have figured out how to be flexible and adapt,” he said, including restructuring the organization and heightening confidentiality, “in order to achieve our mission as we learn more.” OpenAI should ignore my article for now and, in a few weeks’ time, start underscoring its continued commitment to its original principles under the new transformation. “This may also be a good opportunity to talk about the API as a strategy for openness and benefit sharing,” he added, referring to an application programming interface for delivering OpenAI’s models.

    “The most serious issue of all, to me,” he continued, “is that someone leaked our internal documents.” They had already opened an investigation and would keep the company updated. He would also suggest that Amodei and Musk meet to work out Musk’s criticism, which was “mild relative to other things he’s said” but still “a bad thing to do.” For the avoidance of any doubt, Amodei’s work and AI safety were critical to the mission, he wrote. “I think we should at some point in the future find a way to publicly defend our team.”

    OpenAI wouldn’t speak to me again for three years.

    From the book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, by Karen Hao, to be published on May 20, 2025, by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2025 by Karen Hao.
Brockman did not in fact just want to be a janitor. He wanted to lead AGI. And he bristled with the anxious energy of someone who wanted history‑defining recognition. He wanted people to one day tell his story with the same mixture of awe and admiration that he used to recount the ones of the great innovators who came before him. A year before we spoke, he had told a group of young tech entrepreneurs at an exclusive retreat in Lake Tahoe with a twinge of self‑pity that chief technology officers were never known. Name a famous CTO, he challenged the crowd. They struggled to do so. He had proved his point. In 2022, he became OpenAI’s president. During our conversations, Brockman insisted to me that none of OpenAI’s structural changes signaled a shift in its core mission. In fact, the capped profit and the new crop of funders enhanced it. “We managed to get these mission‑aligned investors who are willing to prioritize mission over returns. That’s a crazy thing,” he said. OpenAI now had the long‑term resources it needed to scale its models and stay ahead of the competition. This was imperative, Brockman stressed. Failing to do so was the real threat that could undermine OpenAI’s mission. If the lab fell behind, it had no hope of bending the arc of history toward its vision of beneficial AGI. Only later would I realize the full implications of this assertion. It was this fundamental assumption—the need to be first or perish—that set in motion all of OpenAI’s actions and their far‑reaching consequences. It put a ticking clock on each of OpenAI’s research advancements, based not on the timescale of careful deliberation but on the relentless pace required to cross the finish line before anyone else. It justified OpenAI’s consumption of an unfathomable amount of resources: both compute, regardless of its impact on the environment; and data, the amassing of which couldn’t be slowed by getting consent or abiding by regulations. 
Brockman pointed once again to the billion jump in Microsoft’s market cap. “What that really reflects is AI is delivering real value to the real world today,” he said. That value was currently being concentrated in an already wealthy corporation, he acknowledged, which was why OpenAI had the second part of its mission: to redistribute the benefits of AGI to everyone. Was there a historical example of a technology’s benefits that had been successfully distributed? I asked. “Well, I actually think that—it’s actually interesting to look even at the internet as an example,” he said, fumbling a bit before settling on his answer. “There’s problems, too, right?” he said as a caveat. “Anytime you have something super transformative, it’s not going to be easy to figure out how to maximize positive, minimize negative. “Fire is another example,” he added. “It’s also got some real drawbacks to it. So we have to figure out how to keep it under control and have shared standards. “Cars are a good example,” he followed. “Lots of people have cars, benefit a lot of people. They have some drawbacks to them as well. They have some externalities that are not necessarily good for the world,” he finished hesitantly. “I guess I just view—the thing we want for AGI is not that different from the positive sides of the internet, positive sides of cars, positive sides of fire. The implementation is very different, though, because it’s a very different type of technology.” His eyes lit up with a new idea. “Just look at utilities. Power companies, electric companies are very centralized entities that provide low‑cost, high‑quality things that meaningfully improve people’s lives.” It was a nice analogy. But Brockman seemed once again unclear about how OpenAI would turn itself into a utility. Perhaps through distributing universal basic income, he wondered aloud, perhaps through something else. He returned to the one thing he knew for certain. 
OpenAI was committed to redistributing AGI’s benefits and giving everyone economic freedom. “We actually really mean that,” he said. “The way that we think about it is: Technology so far has been something that does rise all the boats, but it has this real concentrating effect,” he said. “AGI could be more extreme. What if all value gets locked up in one place? That is the trajectory we’re on as a society. And we’ve never seen that extreme of it. I don’t think that’s a good world. That’s not a world that I want to sign up for. That’s not a world that I want to help build.” In February 2020, I published my profile for MIT Technology Review, drawing on my observations from my time in the office, nearly three dozen interviews, and a handful of internal documents. “There is a misalignment between what the company publicly espouses and how it operates behind closed doors,” I wrote. “Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration.” Hours later, Elon Musk replied to the story with three tweets in rapid succession: “OpenAI should be more open imo” “I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high,” he said, referring to Dario Amodei, the director of research. “All orgs developing advanced AI should be regulated, including Tesla” Afterward, Altman sent OpenAI employees an email. “I wanted to share some thoughts about the Tech Review article,” he wrote. “While definitely not catastrophic, it was clearly bad.” It was “a fair criticism,” he said that the piece had identified a disconnect between the perception of OpenAI and its reality. This could be smoothed over not with changes to its internal practices but some tuning of OpenAI’s public messaging. 
“It’s good, not bad, that we have figured out how to be flexible and adapt,” he said, including restructuring the organization and heightening confidentiality, “in order to achieve our mission as we learn more.” OpenAI should ignore my article for now and, in a few weeks’ time, start underscoring its continued commitment to its original principles under the new transformation. “This may also be a good opportunity to talk about the API as a strategy for openness and benefit sharing,” he added, referring to an application programming interface for delivering OpenAI’s models. “The most serious issue of all, to me,” he continued, “is that someone leaked our internal documents.” They had already opened an investigation and would keep the company updated. He would also suggest that Amodei and Musk meet to work out Musk’s criticism, which was “mild relative to other things he’s said” but still “a bad thing to do.” For the avoidance of any doubt, Amodei’s work and AI safety were critical to the mission, he wrote. “I think we should at some point in the future find a way to publicly defend our team.” OpenAI wouldn’t speak to me again for three years. From the book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, by Karen Hao, to be published on May 20, 2025, by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2025 by Karen Hao. #inside #story #that #enraged #openai
    WWW.TECHNOLOGYREVIEW.COM
    Inside the story that enraged OpenAI
    In 2019, Karen Hao, a senior reporter with MIT Technology Review, pitched me on writing a story about a then little-known company, OpenAI. It was her biggest assignment to date. Hao’s feat of reporting took a series of twists and turns over the coming months, eventually revealing how OpenAI’s ambition had taken it far afield from its original mission. The finished story was a prescient look at a company at a tipping point—or already past it. And OpenAI was not happy with the result. Hao’s new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, is an in-depth exploration of the company that kick-started the AI arms race, and what that race means for all of us. This excerpt is the origin story of that reporting. — Niall Firth, executive editor, MIT Technology Review

I arrived at OpenAI’s offices on August 7, 2019. Greg Brockman, then thirty‑one, OpenAI’s chief technology officer and soon‑to‑be company president, came down the staircase to greet me. He shook my hand with a tentative smile. “We’ve never given someone so much access before,” he said. At the time, few people beyond the insular world of AI research knew about OpenAI. But as a reporter at MIT Technology Review covering the ever‑expanding boundaries of artificial intelligence, I had been following its movements closely. Until that year, OpenAI had been something of a stepchild in AI research. It had an outlandish premise that AGI could be attained within a decade, when most non‑OpenAI experts doubted it could be attained at all. To much of the field, it had an obscene amount of funding despite little direction and spent too much of the money on marketing what other researchers frequently snubbed as unoriginal research. It was, for some, also an object of envy. As a nonprofit, it had said that it had no intention to chase commercialization. It was a rare intellectual playground without strings attached, a haven for fringe ideas. 
But in the six months leading up to my visit, the rapid slew of changes at OpenAI signaled a major shift in its trajectory. First was its confusing decision to withhold GPT‑2 and brag about it. Then its announcement that Sam Altman, who had mysteriously departed his influential perch at YC, would step in as OpenAI’s CEO with the creation of its new “capped‑profit” structure. I had already made my arrangements to visit the office when it subsequently revealed its deal with Microsoft, which gave the tech giant priority for commercializing OpenAI’s technologies and locked it into exclusively using Azure, Microsoft’s cloud‑computing platform. Each new announcement garnered fresh controversy, intense speculation, and growing attention, beginning to reach beyond the confines of the tech industry. As my colleagues and I covered the company’s progression, it was hard to grasp the full weight of what was happening. What was clear was that OpenAI was beginning to exert meaningful sway over AI research and the way policymakers were learning to understand the technology. The lab’s decision to revamp itself into a partially for‑profit business would have ripple effects across its spheres of influence in industry and government.  So late one night, with the urging of my editor, I dashed off an email to Jack Clark, OpenAI’s policy director, whom I had spoken with before: I would be in town for two weeks, and it felt like the right moment in OpenAI’s history. Could I interest them in a profile? Clark passed me on to the communications head, who came back with an answer. OpenAI was indeed ready to reintroduce itself to the public. I would have three days to interview leadership and embed inside the company. Brockman and I settled into a glass meeting room with the company’s chief scientist, Ilya Sutskever. Sitting side by side at a long conference table, they each played their part. 
Brockman, the coder and doer, leaned forward, a little on edge, ready to make a good impression; Sutskever, the researcher and philosopher, settled back into his chair, relaxed and aloof. I opened my laptop and scrolled through my questions. OpenAI’s mission is to ensure beneficial AGI, I began. Why spend billions of dollars on this problem and not something else? Brockman nodded vigorously. He was used to defending OpenAI’s position. “The reason that we care so much about AGI and that we think it’s important to build is because we think it can help solve complex problems that are just out of reach of humans,” he said. He offered two examples that had become dogma among AGI believers. Climate change. “It’s a super‑complex problem. How are you even supposed to solve it?” And medicine. “Look at how important health care is in the US as a political issue these days. How do we actually get better treatment for people at lower cost?” On the latter, he began to recount the story of a friend who had a rare disorder and had recently gone through the exhausting rigmarole of bouncing between different specialists to figure out his problem. AGI would bring together all of these specialties. People like his friend would no longer spend so much energy and frustration on getting an answer. Why did we need AGI to do that instead of AI? I asked. This was an important distinction. The term AGI, once relegated to an unpopular section of the technology dictionary, had only recently begun to gain more mainstream usage—in large part because of OpenAI. And as OpenAI defined it, AGI referred to a theoretical pinnacle of AI research: a piece of software that had just as much sophistication, agility, and creativity as the human mind to match or exceed its performance on most (economically valuable) tasks. The operative word was theoretical. 
Since the beginning of earnest research into AI several decades earlier, debates had raged about whether silicon chips encoding everything in their binary ones and zeros could ever simulate brains and the other biological processes that give rise to what we consider intelligence. There had yet to be definitive evidence that this was possible, which didn’t even touch on the normative discussion of whether people should develop it. AI, on the other hand, was the term du jour for both the version of the technology currently available and the version that researchers could reasonably attain in the near future through refining existing capabilities. Those capabilities—rooted in powerful pattern matching known as machine learning—had already demonstrated exciting applications in climate change mitigation and health care. Sutskever chimed in. When it comes to solving complex global challenges, “fundamentally the bottleneck is that you have a large number of humans and they don’t communicate as fast, they don’t work as fast, they have a lot of incentive problems.” AGI would be different, he said. “Imagine it’s a large computer network of intelligent computers—they’re all doing their medical diagnostics; they all communicate results between them extremely fast.” This seemed to me like another way of saying that the goal of AGI was to replace humans. Is that what Sutskever meant? I asked Brockman a few hours later, once it was just the two of us. “No,” Brockman replied quickly. “This is one thing that’s really important. What is the purpose of technology? Why is it here? Why do we build it? We’ve been building technologies for thousands of years now, right? We do it because they serve people. AGI is not going to be different—not the way that we envision it, not the way we want to build it, not the way we think it should play out.” That said, he acknowledged a few minutes later, technology had always destroyed some jobs and created others. 
OpenAI’s challenge would be to build AGI that gave everyone “economic freedom” while allowing them to continue to “live meaningful lives” in that new reality. If it succeeded, it would decouple the need to work from survival. “I actually think that’s a very beautiful thing,” he said. In our meeting with Sutskever, Brockman reminded me of the bigger picture. “What we view our role as is not actually being a determiner of whether AGI gets built,” he said. This was a favorite argument in Silicon Valley—the inevitability card. If we don’t do it, somebody else will. “The trajectory is already there,” he emphasized, “but the thing we can influence is the initial conditions under which it’s born. “What is OpenAI?” he continued. “What is our purpose? What are we really trying to do? Our mission is to ensure that AGI benefits all of humanity. And the way we want to do that is: Build AGI and distribute its economic benefits.” His tone was matter‑of‑fact and final, as if he’d put my questions to rest. And yet we had somehow just arrived back to exactly where we’d started. Our conversation continued on in circles until we ran out the clock after forty‑five minutes. I tried with little success to get more concrete details on what exactly they were trying to build—which by nature, they explained, they couldn’t know—and why, then, if they couldn’t know, they were so confident it would be beneficial. At one point, I tried a different approach, asking them instead to give examples of the downsides of the technology. This was a pillar of OpenAI’s founding mythology: The lab had to build good AGI before someone else built a bad one. Brockman attempted an answer: deepfakes. “It’s not clear the world is better through its applications,” he said. I offered my own example: Speaking of climate change, what about the environmental impact of AI itself? 
A recent study from the University of Massachusetts Amherst had placed alarming numbers on the huge and growing carbon emissions of training larger and larger AI models. That was “undeniable,” Sutskever said, but the payoff was worth it because AGI would, “among other things, counteract the environmental cost specifically.” He stopped short of offering examples. “It is unquestioningly very highly desirable that data centers be as green as possible,” he added. “No question,” Brockman quipped. “Data centers are the biggest consumer of energy, of electricity,” Sutskever continued, seeming intent now on proving that he was aware of and cared about this issue. “It’s 2 percent globally,” I offered. “Isn’t Bitcoin like 1 percent?” Brockman said. “Wow!” Sutskever said, in a sudden burst of emotion that felt, at this point, forty minutes into the conversation, somewhat performative. Sutskever would later sit down with New York Times reporter Cade Metz for his book Genius Makers, which recounts a narrative history of AI development, and say without a hint of satire, “I think that it’s fairly likely that it will not take too long of a time for the entire surface of the Earth to become covered with data centers and power stations.” There would be “a tsunami of computing . . . almost like a natural phenomenon.” AGI—and thus the data centers needed to support them—would be “too useful to not exist.” I tried again to press for more details. “What you’re saying is OpenAI is making a huge gamble that you will successfully reach beneficial AGI to counteract global warming before the act of doing so might exacerbate it.” “I wouldn’t go too far down that rabbit hole,” Brockman hastily cut in. “The way we think about it is the following: We’re on a ramp of AI progress. This is bigger than OpenAI, right? It’s the field. 
And I think society is actually getting benefit from it.” “The day we announced the deal,” he said, referring to Microsoft’s new $1 billion investment, “Microsoft’s market cap went up by $10 billion. People believe there is a positive ROI even just on short‑term technology.” OpenAI’s strategy was thus quite simple, he explained: to keep up with that progress. “That’s the standard we should really hold ourselves to. We should continue to make that progress. That’s how we know we’re on track.” Later that day, Brockman reiterated that the central challenge of working at OpenAI was that no one really knew what AGI would look like. But as researchers and engineers, their task was to keep pushing forward, to unearth the shape of the technology step by step. He spoke like Michelangelo, as though AGI already existed within the marble he was carving. All he had to do was chip away until it revealed itself. There had been a change of plans. I had been scheduled to eat lunch with employees in the cafeteria, but something now required me to be outside the office. Brockman would be my chaperone. We headed two dozen steps across the street to an open‑air café that had become a favorite haunt for employees. This would become a recurring theme throughout my visit: floors I couldn’t see, meetings I couldn’t attend, researchers stealing furtive glances at the communications head every few sentences to check that they hadn’t violated some disclosure policy. I would later learn that after my visit, Jack Clark would issue an unusually stern warning to employees on Slack not to speak with me beyond sanctioned conversations. The security guard would receive a photo of me with instructions to be on the lookout if I appeared unapproved on the premises. It was odd behavior in general, made odder by OpenAI’s commitment to transparency. What, I began to wonder, were they hiding, if everything was supposed to be beneficial research eventually made available to the public? 
At lunch and through the following days, I probed deeper into why Brockman had cofounded OpenAI. He was a teen when he first grew obsessed with the idea that it could be possible to re‑create human intelligence. It was a famous paper from British mathematician Alan Turing that sparked his fascination. Its first section, “The Imitation Game,” which inspired the title of the 2014 Hollywood dramatization of Turing’s life, opens with the provocation, “Can machines think?” The paper goes on to define what would become known as the Turing test: a measure of the progression of machine intelligence based on whether a machine can talk to a human without giving away that it is a machine. It was a classic origin story among people working in AI. Enchanted, Brockman coded up a Turing test game and put it online, garnering some 1,500 hits. It made him feel amazing. “I just realized that was the kind of thing I wanted to pursue,” he said. In 2015, as AI saw great leaps of advancement, Brockman says that he realized it was time to return to his original ambition and joined OpenAI as a cofounder. He wrote down in his notes that he would do anything to bring AGI to fruition, even if it meant being a janitor. When he got married four years later, he held a civil ceremony at OpenAI’s office in front of a custom flower wall emblazoned with the shape of the lab’s hexagonal logo. Sutskever officiated. The robotic hand they used for research stood in the aisle bearing the rings, like a sentinel from a post-apocalyptic future. “Fundamentally, I want to work on AGI for the rest of my life,” Brockman told me. What motivated him? I asked Brockman. What are the chances that a transformative technology could arrive in your lifetime? he countered. He was confident that he—and the team he assembled—was uniquely positioned to usher in that transformation. “What I’m really drawn to are problems that will not play out in the same way if I don’t participate,” he said. 
Brockman did not in fact just want to be a janitor. He wanted to lead AGI. And he bristled with the anxious energy of someone who wanted history‑defining recognition. He wanted people to one day tell his story with the same mixture of awe and admiration that he used to recount the ones of the great innovators who came before him. A year before we spoke, he had told a group of young tech entrepreneurs at an exclusive retreat in Lake Tahoe with a twinge of self‑pity that chief technology officers were never known. Name a famous CTO, he challenged the crowd. They struggled to do so. He had proved his point. In 2022, he became OpenAI’s president. During our conversations, Brockman insisted to me that none of OpenAI’s structural changes signaled a shift in its core mission. In fact, the capped profit and the new crop of funders enhanced it. “We managed to get these mission‑aligned investors who are willing to prioritize mission over returns. That’s a crazy thing,” he said. OpenAI now had the long‑term resources it needed to scale its models and stay ahead of the competition. This was imperative, Brockman stressed. Failing to do so was the real threat that could undermine OpenAI’s mission. If the lab fell behind, it had no hope of bending the arc of history toward its vision of beneficial AGI. Only later would I realize the full implications of this assertion. It was this fundamental assumption—the need to be first or perish—that set in motion all of OpenAI’s actions and their far‑reaching consequences. It put a ticking clock on each of OpenAI’s research advancements, based not on the timescale of careful deliberation but on the relentless pace required to cross the finish line before anyone else. It justified OpenAI’s consumption of an unfathomable amount of resources: both compute, regardless of its impact on the environment; and data, the amassing of which couldn’t be slowed by getting consent or abiding by regulations. 
Brockman pointed once again to the $10 billion jump in Microsoft’s market cap. “What that really reflects is AI is delivering real value to the real world today,” he said. That value was currently being concentrated in an already wealthy corporation, he acknowledged, which was why OpenAI had the second part of its mission: to redistribute the benefits of AGI to everyone. Was there a historical example of a technology’s benefits that had been successfully distributed? I asked. “Well, I actually think that—it’s actually interesting to look even at the internet as an example,” he said, fumbling a bit before settling on his answer. “There’s problems, too, right?” he said as a caveat. “Anytime you have something super transformative, it’s not going to be easy to figure out how to maximize positive, minimize negative. “Fire is another example,” he added. “It’s also got some real drawbacks to it. So we have to figure out how to keep it under control and have shared standards. “Cars are a good example,” he followed. “Lots of people have cars, benefit a lot of people. They have some drawbacks to them as well. They have some externalities that are not necessarily good for the world,” he finished hesitantly. “I guess I just view—the thing we want for AGI is not that different from the positive sides of the internet, positive sides of cars, positive sides of fire. The implementation is very different, though, because it’s a very different type of technology.” His eyes lit up with a new idea. “Just look at utilities. Power companies, electric companies are very centralized entities that provide low‑cost, high‑quality things that meaningfully improve people’s lives.” It was a nice analogy. But Brockman seemed once again unclear about how OpenAI would turn itself into a utility. Perhaps through distributing universal basic income, he wondered aloud, perhaps through something else. He returned to the one thing he knew for certain. 
OpenAI was committed to redistributing AGI’s benefits and giving everyone economic freedom. “We actually really mean that,” he said. “The way that we think about it is: Technology so far has been something that does rise all the boats, but it has this real concentrating effect,” he said. “AGI could be more extreme. What if all value gets locked up in one place? That is the trajectory we’re on as a society. And we’ve never seen that extreme of it. I don’t think that’s a good world. That’s not a world that I want to sign up for. That’s not a world that I want to help build.” In February 2020, I published my profile for MIT Technology Review, drawing on my observations from my time in the office, nearly three dozen interviews, and a handful of internal documents. “There is a misalignment between what the company publicly espouses and how it operates behind closed doors,” I wrote. “Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration.” Hours later, Elon Musk replied to the story with three tweets in rapid succession: “OpenAI should be more open imo” “I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high,” he said, referring to Dario Amodei, the director of research. “All orgs developing advanced AI should be regulated, including Tesla” Afterward, Altman sent OpenAI employees an email. “I wanted to share some thoughts about the Tech Review article,” he wrote. “While definitely not catastrophic, it was clearly bad.” It was “a fair criticism,” he said, that the piece had identified a disconnect between the perception of OpenAI and its reality. This could be smoothed over not with changes to its internal practices but with some tuning of OpenAI’s public messaging. 
“It’s good, not bad, that we have figured out how to be flexible and adapt,” he said, including restructuring the organization and heightening confidentiality, “in order to achieve our mission as we learn more.” OpenAI should ignore my article for now and, in a few weeks’ time, start underscoring its continued commitment to its original principles under the new transformation. “This may also be a good opportunity to talk about the API as a strategy for openness and benefit sharing,” he added, referring to an application programming interface for delivering OpenAI’s models.

“The most serious issue of all, to me,” he continued, “is that someone leaked our internal documents.” They had already opened an investigation and would keep the company updated. He would also suggest that Amodei and Musk meet to work out Musk’s criticism, which was “mild relative to other things he’s said” but still “a bad thing to do.” For the avoidance of any doubt, Amodei’s work and AI safety were critical to the mission, he wrote. “I think we should at some point in the future find a way to publicly defend our team (but not give the press the public fight they’d love right now).”

OpenAI wouldn’t speak to me again for three years.

From the book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, by Karen Hao, to be published on May 20, 2025, by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2025 by Karen Hao.
  • The Download: chaos at OpenAI, and the spa heated by bitcoin mining

    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

    Inside the story that enraged OpenAI

    —Niall Firth, executive editor, MIT Technology Review

    In 2019, Karen Hao, a senior reporter with MIT Technology Review, pitched me a story about a then little-known company, OpenAI. It was her biggest assignment to date. Hao’s feat of reporting took a series of twists and turns over the coming months, eventually revealing how OpenAI’s ambition had taken it far afield from its original mission. The finished story was a prescient look at a company at a tipping point—or already past it. And OpenAI was not happy with the result. Hao’s new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, is an in-depth exploration of the company that kick-started the AI arms race, and what that race means for all of us. This excerpt is the origin story of that reporting.

    This spa’s water is heated by bitcoin mining

    At first glance, the Bathhouse spa in Brooklyn looks not so different from other high-end spas. What sets it apart is out of sight: a closet full of cryptocurrency-mining computers that not only generate bitcoins but also heat the spa’s pools, marble hammams, and showers.

    When cofounder Jason Goodman opened Bathhouse’s first location in Williamsburg in 2019, he used conventional pool heaters. But after diving deep into the world of bitcoin, he realized he could fit cryptocurrency mining seamlessly into his business. Read the full story.

    —Carrie Klein

    This story is from the most recent edition of our print magazine, which is all about how technology is changing creativity. Subscribe now to read it and to receive future print copies once they land.

    The must-reads

    I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

    1 Nvidia wants to build an AI supercomputer in Taiwan
    As Trump’s tariffs upend existing supply chains. (WSJ $)
    + Jensen Huang has denied that Nvidia’s chips are being diverted into China. (Bloomberg $)

    2 xAI’s Grok dabbled in Holocaust denial
    The chatbot said it was “skeptical” about points that historians agree are facts. (Rolling Stone $)
    + It blamed the comments on a programming error. (The Guardian)

    3 Apple is planning to overhaul Siri entirely
    To make it an assistant fit for the AI age. (Bloomberg $)

    4 Dentists are worried by RFK Jr’s fluoride ban
    Particularly in rural America. (Ars Technica)
    + Florida has become the second state to ban fluoride in public water. (NBC News)

    5 Fewer people want to work in America’s factories
    That’s a problem when Trump is so hell-bent on kickstarting the manufacturing industry. (WSJ $)
    + Sweeping tariffs could threaten the US manufacturing rebound. (MIT Technology Review)

    6 Meet the crypto investors hoping to bend the President’s ear
    They’re treating Trump’s meme coin dinner as an opportunity to push their agendas. (WP $)
    + Many of them are offloading their coins, too. (Wired $)
    + Crypto bigwigs are targets for criminals. (WSJ $)
    + Bodyguards and other forms of security are becoming de rigueur. (Bloomberg $)

    7 How the US reversed the overdose epidemic
    Naloxone is a major factor. (Vox)
    + How the federal government is tracking changes in the supply of street drugs. (MIT Technology Review)

    8 Chatbots really love the heads of the companies that made them
    And are not so fond of the leaders of their rivals. (FT $)
    + What if we could just ask AI to be less biased? (MIT Technology Review)

    9 Technology is a double-edged sword
    What connects us can simultaneously outrage us. (The Atlantic $)

    10 Meet the people hooked on watching nature live streams
    They find checking in with animals puts their own troubles in perspective. (The Guardian)

    Quote of the day

    “People are just scared. They don’t know where they fit in this new world.”

    —Angela Jiang, who is working on a startup exploring the impact of AI on the labor market, tells the Wall Street Journal about the woes of tech job seekers trying to land new jobs in the current economy.

    One more thing

    How the Rubin Observatory will help us understand dark matter and dark energy

    We can put a good figure on how much we know about the universe: 5%. That’s how much of what’s floating about in the cosmos is ordinary matter—planets and stars and galaxies and the dust and gas between them. The other 95% is dark matter and dark energy, two mysterious entities aptly named for our inability to shed light on their true nature.

    Previous work has begun pulling apart these dueling forces, but dark matter and dark energy remain shrouded in a blanket of questions—critically, what exactly are they?

    Enter the Vera C. Rubin Observatory, one of our 10 breakthrough technologies for 2025. Boasting the largest digital camera ever created, Rubin is expected to study the cosmos in the highest resolution yet once it begins observations later this year. And with a better window on the cosmic battle between dark matter and dark energy, Rubin might narrow down existing theories on what they are made of. Here’s a look at how.

    —Jenna Ahart

    We can still have nice things

    A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
    + Archaeologists in Canada are facing a mighty challenge—to solve how thousands of dinosaurs died in what’s now a forest in Alberta.
    + Before Brian Johnson joined AC/DC, he sang on this very distinctive hoover (vacuum cleaner) ad.
    + Wealthy Londoners are adding spas to their gardens, because why not.
    + I must eat the crystal breakfast!
  • Elon Musk’s apparent power play at the Copyright Office completely backfired
    What initially appeared to be a power play by Elon Musk and the Department of Government Efficiency (DOGE) to take over the US Copyright Office by having Donald Trump remove the officials in charge has now backfired in spectacular fashion, as Trump’s acting replacements are known to be unfriendly — and even downright hostile — to the tech industry.
    When Trump fired Librarian of Congress Carla Hayden last week and Register of Copyrights Shira Perlmutter over the weekend, it was seen as another move driven by the tech wing of the Republican Party — especially in light of the Copyright Office releasing a pre-publication report saying some kinds of generative AI training would not be considered fair use.
    And when two men showed up at the Copyright Office inside the Library of Congress carrying letters purporting to appoint them to acting leadership positions, the DOGE takeover appeared to be complete. But those two men, Paul Perkins and Brian Nieves, were not from DOGE at all; instead, they had been approved by the MAGA wing of the Trump coalition that aims to put tech companies in check.
    Perkins, now the supposed acting Register of Copyrights, is an eight-year veteran of the DOJ who served in the first Trump administration prosecuting fraud cases.
    Nieves, the putative acting deputy librarian, is currently at the Office of the Deputy Attorney General, having previously been a lawyer on the House Judiciary Committee, where he worked with Rep. Jim Jordan on Big Tech investigations.
    And Todd Blanche, the putative Acting Librarian of Congress who would be their boss, is a staunch Trump ally who represented him during his 2024 Manhattan criminal trial, and is now the Deputy Attorney General overseeing the DOJ’s side in the Google Search remedies case.
    As one government affairs lobbyist told The Verge, Blanche is “there to stick it to tech.” The appointments of Blanche, Perkins, and Nieves are the result of furious lobbying over the weekend by the conservative content industry — as jealously protective of its copyrighted works as any other media companies — as well as populist Republican lawmakers and lawyers, all enraged that Silicon Valley had somehow persuaded Trump to fire someone who’d recently criticized AI companies. The populists were particularly rankled over Perlmutter’s removal from the helm of the Copyright Office, which happened the day after the agency released a pre-publication version of its report on the use of copyrighted material in training generative AI systems.
    Sources speaking to The Verge are convinced the firings were a tech industry power play led by Elon Musk and “White House A.I. & Crypto Czar” David Sacks, meant to eliminate any resistance to AI companies using copyrighted material to train models without having to pay for it. “You can say, well, we have to compete with China.
    No, we don’t have to steal content to compete with China.
    We don’t have slave labor to compete with China.
    It’s a bullshit argument,” Mike Davis, the president of the Article III project and a key antitrust advisor to Trump, told The Verge.
    “It’s not fair use under the copyright laws to take everyone’s content and have the big tech platforms monetize it.
    That’s the opposite of fair use.
    That’s a copyright infringement.” It’s the rare time that MAGA world is in agreement with the Democratic Party, which has roundly condemned the firings of Hayden and Perlmutter, and also zeroed in on the Musk-Sacks faction as the instigator.
    In a press release, Rep. Joe Morelle (D-NY) characterized the hundred-plus-page report, the third installment of a series that the office has put out on copyright and artificial intelligence, as “refus[ing] to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models.” Meanwhile, Sen. Ron Wyden (D-OR), who told The Verge in an emailed statement that the president had no power to fire either Hayden or Perlmutter, said, “This all looks like another way to pay back Elon Musk and the other AI billionaires who backed Trump’s campaign.”
    Publications like the AI report essentially lay out how the Copyright Office interprets copyright law.
    But the agency’s interpretation of what is or isn’t fair use does not have binding force on the courts, so a report like this one functions mostly as expert commentary and reference material.
    However, the entire AI industry is built on an expansive interpretation of copyright law that’s currently being tested in the courts — a situation that’s created dire need for exactly this sort of expert commentary.
    The AI report applies the law of fair use to different kinds of AI training and usage, concluding that although outcomes might differ case by case, “making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.” But far from advising drastic action in response to what the Office believes is rampant copyright infringement, the report instead states that “government intervention would be premature at this time,” given that licensing agreements are being made across various sectors. The unoffending nature of the report made Perlmutter’s removal all the more alarming to the MAGA ideologues in Trump’s inner circle, who saw this as a clear power grab, and were immediately vocal about it.
    “Now tech bros are going to steal creators’ copyrights for AP profits,” Davis posted immediately on Truth Social, along with a link to a CBS story about Perlmutter’s firing.
    “This is 100% unacceptable.” Curiously, just after Davis published the post, Trump reposted it, link and all.
    None of Trump’s purported appointees have a particularly relevant background for their new jobs — but they are certainly not DOGE people and, generally speaking, are not the kind of people that generative AI proponents would want in the office.
    And for now, this counts as a political win for the anti-tech populists, even if nothing further happens.
    “Sometimes when you make a pitch to leadership to get rid of someone, the person who comes in after isn’t any better,” said a source familiar with the dynamic between the White House and both sides of the copyright issue.
    “You don’t necessarily get to name the successor and fire someone, and so in many cases, I’ve seen people get pushed out the door and the replacement is even worse.” The speed of the firings and subsequent power struggle, however, have underscored the brewing constitutional crisis sparked by Trump’s frequent firing of independent agency officials confirmed by Congress.
    The Library of Congress firings, in particular, reach well past the theory of executive power claimed by the White House and into even murkier territory.
    It’s legally dubious whether the Librarian of Congress can be removed by the president, as the Library, a legislative branch agency that significantly predates the administrative state, does not fit neatly into the modern-day legal framework of federal agencies.
    (Of course, everything about the law is in upheaval even where agencies do fit the framework.) Regardless, the law clearly states that the Librarian of Congress — not the president — appoints the Register of Copyrights. At the moment, the Library of Congress has not received any direction from Congress on how to move forward.
    The constitutional crisis — one of many across the federal government — remains ongoing.
    Elon Musk and xAI did not respond to a request for comment.

    Additional reporting by Sarah Jeong.

    Source: https://www.theverge.com/politics/666179/maga-elon-musk-sacks-copyright-office-perlmutter
    #elon #musks #apparent #power #play #the #copyright #office #completely #backfired
    Elon Musk’s apparent power play at the Copyright Office completely backfired
    What initially appeared to be a power play by Elon Musk and the Department of Government Efficiency (DOGE) to take over the US Copyright Office by having Donald Trump remove the officials in charge has now backfired in spectacular fashion, as Trump’s acting replacements are known to be unfriendly — and even downright hostile — to the tech industry. When Trump fired Librarian of Congress Carla Hayden last week and Register of Copyrights Shira Perlmutter over the weekend, it was seen as another move driven by the tech wing of the Republican Party — especially in light of the Copyright Office releasing a pre-publication report saying some kinds of generative AI training would not be considered fair use. And when two men showed up at the Copyright Office inside the Library of Congress carrying letters purporting to appoint them to acting leadership positions, the DOGE takeover appeared to be complete.But those two men, Paul Perkins and Brian Nieves, were not DOGE at all, but instead approved by the MAGA wing of the Trump coalition that aims to put tech companies in check. Perkins, now the supposed acting Register of Copyrights, is an eight-year veteran of the DOJ who served in the first Trump administration prosecuting fraud cases. Nieves, the putative acting deputy librarian, is currently at the Office of the Deputy Attorney General, having previously been a lawyer on the House Judiciary Committee, where he worked with Rep. Jim Jordan on Big Tech investigations. And Todd Blanche, the putative Acting Librarian of Congress who would be their boss, is a staunch Trump ally who represented him during his 2024 Manhattan criminal trial, and is now the Deputy Attorney General overseeing the DOJ’s side in the Google Search remedies case. 
As one government affairs lobbyist told The Verge, Blanche is “there to stick it to tech.”The appointments of Blanche, Perkins, and Nieves are the result of furious lobbying over the weekend by the conservative content industry — as jealously protective of its copyrighted works as any other media companies — as well as populist Republican lawmakers and lawyers, all enraged that Silicon Valley had somehow persuaded Trump to fire someone who’d recently criticized AI companies.Sources speaking to The Verge are convinced the firings were a tech industry power play led by Elon Musk and David SacksThe populists were particularly rankled over Perlmutter’s removal from the helm of the Copyright Office, which happened the day after the agency released a pre-publication version of its report on the use of copyrighted material in training generative AI systems. Sources speaking to The Verge are convinced the firings were a tech industry power play led by Elon Musk and “White House A.I. & Crypto Czar” David Sacks, meant to eliminate any resistance to AI companies using copyrighted material to train models without having to pay for it.“You can say, well, we have to compete with China. No, we don’t have to steal content to compete with China. We don’t have slave labor to compete with China. It’s a bullshit argument,” Mike Davis, the president of the Article III project and a key antitrust advisor to Trump, told The Verge. “It’s not fair use under the copyright laws to take everyone’s content and have the big tech platforms monetize it. That’s the opposite of fair use. That’s a copyright infringement.”It’s the rare time that MAGA world is in agreement with the Democratic Party, which has roundly condemned the firings of Hayden and Perlmutter, and also zeroed in on the Musk-Sacks faction as the instigator.In a press release, Rep. 
Source: https://www.theverge.com/politics/666179/maga-elon-musk-sacks-copyright-office-perlmutter
Elon Musk’s apparent power play at the Copyright Office completely backfired
What initially appeared to be a power play by Elon Musk and the Department of Government Efficiency (DOGE) to take over the US Copyright Office by having Donald Trump remove the officials in charge has now backfired in spectacular fashion, as Trump’s acting replacements are known to be unfriendly — and even downright hostile — to the tech industry. When Trump fired Librarian of Congress Carla Hayden last week and Register of Copyrights Shira Perlmutter over the weekend, it was seen as another move driven by the tech wing of the Republican Party — especially in light of the Copyright Office releasing a pre-publication report saying some kinds of generative AI training would not be considered fair use. And when two men showed up at the Copyright Office inside the Library of Congress carrying letters purporting to appoint them to acting leadership positions, the DOGE takeover appeared to be complete.

But those two men, Paul Perkins and Brian Nieves, were not DOGE at all, but were instead approved by the MAGA wing of the Trump coalition that aims to put tech companies in check. Perkins, now the supposed acting Register of Copyrights, is an eight-year veteran of the DOJ who served in the first Trump administration prosecuting fraud cases. Nieves, the putative acting deputy librarian, is currently at the Office of the Deputy Attorney General, having previously been a lawyer on the House Judiciary Committee, where he worked with Rep. Jim Jordan on Big Tech investigations. And Todd Blanche, the putative acting Librarian of Congress who would be their boss, is a staunch Trump ally who represented him during his 2024 Manhattan criminal trial and is now the Deputy Attorney General overseeing the DOJ’s side in the Google Search remedies case. As one government affairs lobbyist told The Verge, Blanche is “there to stick it to tech.”

The appointments of Blanche, Perkins, and Nieves are the result of furious lobbying over the weekend by the conservative content industry — as jealously protective of its copyrighted works as any other media companies — as well as populist Republican lawmakers and lawyers, all enraged that Silicon Valley had somehow persuaded Trump to fire someone who’d recently criticized AI companies.

The populists were particularly rankled by Perlmutter’s removal from the helm of the Copyright Office, which happened the day after the agency released a pre-publication version of its report on the use of copyrighted material in training generative AI systems. Sources speaking to The Verge are convinced the firings were a tech industry power play led by Elon Musk and “White House A.I. & Crypto Czar” David Sacks, meant to eliminate any resistance to AI companies using copyrighted material to train models without having to pay for it.

“You can say, well, we have to compete with China. No, we don’t have to steal content to compete with China. We don’t have slave labor to compete with China. It’s a bullshit argument,” Mike Davis, the president of the Article III Project and a key antitrust advisor to Trump, told The Verge. “It’s not fair use under the copyright laws to take everyone’s content and have the big tech platforms monetize it. That’s the opposite of fair use. That’s a copyright infringement.”

It’s the rare time that MAGA world is in agreement with the Democratic Party, which has roundly condemned the firings of Hayden and Perlmutter, and also zeroed in on the Musk-Sacks faction as the instigator.

In a press release, Rep. Joe Morelle (D-NY) characterized the hundred-plus-page report, the third installment of a series that the office has put out on copyright and artificial intelligence, as “refus[ing] to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models.” Meanwhile, Sen. Ron Wyden (D-OR), who told The Verge in an emailed statement that the president had no power to fire either Hayden or Perlmutter, said, “This all looks like another way to pay back Elon Musk and the other AI billionaires who backed Trump’s campaign.”

Publications like the AI report essentially lay out how the Copyright Office interprets copyright law. But the agency’s interpretation of what is or isn’t fair use does not have binding force on the courts, so a report like this one functions mostly as expert commentary and reference material. However, the entire AI industry is built on an expansive interpretation of copyright law that’s currently being tested in the courts — a situation that’s created dire need for exactly this sort of expert commentary.

The AI report applies the law of fair use to different kinds of AI training and usage, concluding that although outcomes might differ case by case, “making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.” But far from advising drastic action in response to what the Office believes is rampant copyright infringement, the report instead states that “government intervention would be premature at this time,” given that licensing agreements are being made across various sectors.

The unoffending nature of the report made Perlmutter’s removal all the more alarming to the MAGA ideologues in Trump’s inner circle, who saw it as a clear power grab and were immediately vocal about it. “Now tech bros are going to steal creators’ copyrights for AP profits,” Davis posted on Truth Social, along with a link to a CBS story about Perlmutter’s firing. “This is 100% unacceptable.” Curiously, just after Davis published the post, Trump reposted it, link and all.

None of Trump’s purported appointees have a particularly relevant background for their new jobs — but they are certainly not DOGE people and, generally speaking, are not the kind of people that generative AI proponents would want in the office. And for now, this counts as a political win for the anti-tech populists, even if nothing further happens. “Sometimes when you make a pitch to leadership to get rid of someone, the person who comes in after isn’t any better,” said a source familiar with the dynamic between the White House and both sides of the copyright issue. “You don’t necessarily get to name the successor and fire someone, and so in many cases, I’ve seen people get pushed out the door and the replacement is even worse.”

The speed of the firings and the subsequent power struggle, however, have underscored the brewing constitutional crisis sparked by Trump’s frequent firing of independent agency officials confirmed by Congress. The Library of Congress firings, in particular, reach well past the theory of executive power claimed by the White House and into even murkier territory. It’s legally dubious whether the Librarian of Congress can be removed by the president, as the Library, a legislative branch agency that significantly predates the administrative state, does not fit neatly into the modern-day legal framework of federal agencies. (Of course, everything about the law is in upheaval even where agencies do fit the framework.) Regardless, the law clearly states that the Librarian of Congress — not the president — appoints the Register of Copyrights.

At the moment, the Library of Congress has not received any direction from Congress on how to move forward. The constitutional crisis — one of many across the federal government — remains ongoing. Elon Musk and xAI did not respond to a request for comment.

Additional reporting by Sarah Jeong.