• Christian Marclay explores a universe of thresholds in his latest single-channel montage of film clips

    Doors (2022)
    Christian Marclay
    Institute of Contemporary Art Boston
    Through September 1, 2025
    Brooklyn Museum
    Through April 12, 2026

    On the screen, a movie clip plays of a character entering through one door and exiting through another. It cuts to another clip of someone else doing the same thing, over and over, all sourced from a panoply of Western cinema. The audience, sitting for an unknown amount of time, watches this shape-shifting protagonist from different cultural periods come and go as the film endlessly loops.

    So goes Christian Marclay’s latest single-channel film, Doors (2022), currently exhibited for the first time in the United States at the Institute of Contemporary Art Boston. (It also premieres June 13 at the Brooklyn Museum, where it will run through April 12, 2026.) Assembled over ten years, the film is a dizzying feat, a carefully crafted montage of film clips revolving around the simple premise of someone entering through one door and exiting through another. In the exhibition, Marclay writes, “Doors are fascinating objects, rich with symbolism.” Here, he shows hundreds of them, examining through film how the simple act of moving through a threshold, multiplied endlessly, creates a profoundly new reading of what that threshold signifies.
    On paper, this may sound like an extremely jarring experience. But Marclay—a visual artist, composer, and DJ whose previous works, such as The Clock (2010), involved similar mega-montages of disparate film clips—has a sensitive touch. The sequences feel incredibly smooth, the montage carefully constructed to mimic continuity as closely as possible. This is even more impressive when one considers the constraints a door’s movement imposes; it must open and close in a certain direction, with particular types of hinges or means of swinging. That makes the seamlessness of the film all the more fascinating to dissect. When a tiny wooden doorframe cuts to a large double steel door, my brain had no issue registering a sense of continued motion through the frame—a form of cinematic magic.
    Christian Marclay, Doors (still), 2022. Single-channel video projection (color and black-and-white; 55:00 minutes on continuous loop).
    Watching the clips, there seemed to be no discernible metanarrative—simply movement through doors. Nevertheless, Marclay is a master of controlling tone. Though the relentlessness of the loops creates an overall tension that the film clearly plays on, moments of levity often interrupt, giving visitors a chance to breathe. The pacing, too, swings from a person rushing in and out to a slow stroll between doors in a corridor. It leaves one musing on just how ubiquitous this simple action is, and how mutable the act of pulling a door and stepping inside can be. Sometimes mundane, sometimes thrilling, sometimes in anticipation, sometimes in search—Doors invites us to reflect on our own interactions with these objects, and with the very act of stepping through a doorframe.

    Much of the experience rests on the soundscape and music, which is equally—if not more—important in creating the transitions across clips. Marclay’s previous work leaned heavily on his interest in aural media; this added dimension only enriches Doors and elevates it beyond a formal visual study of clips that match each other. The film bleeds music from one scene into another, sometimes prematurely, to make believable the movement of one character across multiple movies. This overlap of sounds is essentially an echo of the space we left behind and the one we are entering. We as the audience almost believe—even if just for a second—that the transition is real.
    The effect is powerful and calls to mind several references. No doubt Doors owes some degree of inspiration to the lineage of surrealist art, perhaps the work of Magritte or Duchamp. Those steeped in architecture may think of Bernard Tschumi’s Manhattan Transcripts, whose transcriptions of events, spaces, and movements similarly both shatter and call attention to simple spatial sequences. One may also be reminded of the work of the Situationist International, particularly the psychogeography of Guy Debord. I confess that my first thought was the (in my view) equally famous door-chase scene in Monsters, Inc. But regardless of what corollaries one may conjure, Doors has a wholly unique feel. It is simple and singular in constructing its webbed world.
    Installation view, Christian Marclay: Doors, the Institute of Contemporary Art/Boston, 2025. (Mel Taing)

    But what exactly are we to take away from this world? In an interview with Artforum, Marclay declares, “I’m building in people’s minds an architecture in which to get lost.” The film evokes a certain act of labyrinthine mapping—or perhaps a mode of perpetual resetting. I began to imagine it almost as a non-Euclidean enfilade of sorts, where each room invites you to quickly grasp a new environment and then just as quickly anticipate what may lie behind the next door. With the understanding that you can’t backtrack, and the unpredictability of the next door taking you anywhere, the film holds you in total suspense. The production of new spaces and new architecture is activated all at once the moment someone steps through a new doorway.

    All of this is without even mentioning the chosen films themselves. There is a degree to which the pop-culture element of Marclay’s work makes certain moments click—I can’t help but laugh as I watch Adam Sandler in Punch-Drunk Love exit a door and emerge as Bette Davis in All About Eve. But to a degree, I also see the references as secondary, and certainly not needed to understand the visceral experience Marclay crafts. It helps that, aside from a couple of jarring character movements or one-off spoken jokes, the movement is repetitive and universal.
    Doors runs on a continuous loop. I sat watching for just under an hour before convincing myself that I would never find any appropriate or correct time to leave. Instead, I could sit endlessly and reflect on each character movement, each new reveal of a room. Is the door the most important architectural element in creating space? Marclay makes a strong case for it with this piece.
    Harish Krishnamoorthy is an architectural and urban designer based in Cambridge, Massachusetts, and Bangalore, India. He is an editor at PAIRS.
    WWW.ARCHPAPER.COM
  • Five Climate Issues to Watch When Trump Goes to Canada

    June 13, 2025 | 5 min read

    Five Climate Issues to Watch When Trump Goes to Canada

    President Trump will attend the G7 summit on Sunday in a nation he threatened to annex. He will also be an outlier on climate issues.

    By Sara Schonhardt & E&E News
    Saul Loeb/AFP via Getty Images

    CLIMATEWIRE | The world’s richest nations are gathering Sunday in the Canadian Rockies for a summit that could reveal whether President Donald Trump’s policies are shaking global climate efforts.

    The Group of Seven meeting comes at a challenging time for international climate policy. Trump’s tariff seesaw has cast a shadow over the global economy, and his domestic policies have threatened billions of dollars in funding for clean energy programs. Those pressures are colliding with record-breaking temperatures worldwide and explosive demand for energy, driven by power-hungry data centers linked to artificial intelligence technologies.

    On top of that, Trump has threatened to annex the host of the meeting — Canada — and members of his Cabinet have taken swipes at Europe’s use of renewable energy. Rather than aligning with much of the world’s assertion that fossil fuels should be tempered, Trump embraces the opposite position — drill for more oil and gas and keep burning coal, while repealing environmental regulations on the biggest sources of U.S. carbon pollution.

    Those moves illustrate his rejection of climate science and underscore his outlying positions on global warming in the G7. Here are five things to know about the summit.

    Who will be there?

    The group comprises Canada, France, Germany, Italy, Japan, the United Kingdom and the United States — plus the European Union.
Together they account for more than 40 percent of gross domestic product globally and around a quarter of all energy-related carbon dioxide pollution, according to the International Energy Agency. The U.S. is the only one among them that is not trying to hit a carbon reduction goal.

    Some emerging economies have also been invited, including Mexico, India, South Africa and Brazil, the host of this year’s COP30 climate talks in November.

    Ahead of the meeting, the office of Canada’s prime minister, Mark Carney, said he and Brazilian President Luiz Inácio Lula da Silva agreed to strengthen cooperation on energy security and critical minerals. White House press secretary Karoline Leavitt said Trump would be having “quite a few” bilateral meetings but that his schedule was in flux.

    The G7 first came together 50 years ago following the Arab oil embargo. Since then, its seven members have all joined the United Nations Framework Convention on Climate Change and the Paris Agreement. The U.S. is the only nation in the group that has withdrawn from the Paris Agreement, which counts almost every country in the world as a signatory.

    What’s on the table?

    Among Canada’s top priorities as host are strengthening energy security and fortifying critical mineral supply chains. Carney would also like to see some agreement on joint wildfire action.

    Expanding supply chains for critical minerals — and competing more aggressively with China over those resources — could be areas of common ground among the leaders. Climate change is expected to remain divisive.
Looming over the discussions will be tariffs — which Trump has applied across the board — because they will have an impact on the clean energy transition.

    “I think probably the majority of the conversation will be less about climate per se, or certainly not using climate action as the frame, but more about energy transition and infrastructure as a way of kind of bridging the known gaps between most of the G7 and where the United States is right now,” said Dan Baer, director of the Europe program at the Carnegie Endowment for International Peace.

    What are the possible outcomes?

    The leaders could issue a communique at the end of their meeting, but those statements are based on consensus, something that would be difficult to reach without other G7 countries capitulating to Trump. Bloomberg reported Wednesday that nations won’t try to reach a joint agreement, in part because bridging gaps on climate change could be too hard. Instead, Carney could issue a chair’s summary or joint statements on certain issues.

    The question is how far Canada will go to accommodate the U.S., which could try to roll back past statements on advancing clean energy, said Andrew Light, former assistant secretary of Energy for international affairs, who led ministerial-level negotiations for the G7.

    “They might say, rather than watering everything down that we accomplished in the last four years, we just do a chair’s statement, which summarizes the debate,” Light said. “That will show you that you didn’t get consensus, but you also didn’t get capitulation.”

    What to watch for

    If there is a communique, Light says he’ll be looking for whether there is tougher language on China and any signal of support for science and the Paris Agreement. During his first term, Trump refused to support the Paris accord in the G7 and G20 declarations. The statement could avoid climate and energy issues entirely.
But if it backtracks on those issues, that could be a sign that countries made a deal by trading climate-related language for something else, Light said.

    Baer of Carnegie said a statement framed around energy security and infrastructure could be seen as a “pragmatic adaptation” to the U.S. administration, rather than an indication that other leaders aren’t concerned about climate change.

    Climate activists have lower expectations. “Realistically, we can expect very little, if any, mention of climate change,” said Caroline Brouillette, executive director of Climate Action Network Canada.

    “The message we should be expecting from those leaders is that climate action remains a priority for the rest of the G7 … whether it’s on the transition away from fossil fuels and supporting developing countries through climate finance,” she said. “Especially now that the U.S. is stepping back, we need countries, including Canada, to be stepping up.”

    Best- and worst-case scenarios

    The challenge for Carney will be preventing any further rupture with Trump, analysts said. In 2018, Trump made a hasty exit from the G7 summit, also held in Canada that year, due largely to trade disagreements. He retracted his support for the joint statement.

    “The best realistic case outcome is that things don’t get worse,” said Baer. The worst-case scenario? Some kind of “highly personalized spat” that could add to the sense of disorder, he added.

    “I think the G7 on the one hand has the potential to be more important than ever, as fewer and fewer platforms for international cooperation seem to be able to take action,” Baer said. “So it’s both very important and also I don’t have super-high expectations.”

    Reprinted from E&E News with permission from POLITICO, LLC. Copyright 2025. E&E News provides essential news for energy and environment professionals.
    WWW.SCIENTIFICAMERICAN.COM
    Five Climate Issues to Watch When Trump Goes to Canada
    June 13, 2025 | 5 min read

    President Trump will attend the G7 summit on Sunday in a nation he threatened to annex. He will also be an outlier on climate issues.

    By Sara Schonhardt & E&E News

CLIMATEWIRE | The world’s richest nations are gathering Sunday in the Canadian Rockies for a summit that could reveal whether President Donald Trump's policies are shaking global climate efforts.

The Group of Seven meeting comes at a challenging time for international climate policy. Trump’s tariff seesaw has cast a shadow over the global economy, and his domestic policies have threatened billions of dollars in funding for clean energy programs. Those pressures are colliding with record-breaking temperatures worldwide and explosive demand for energy, driven by power-hungry data centers linked to artificial intelligence technologies.

On top of that, Trump has threatened to annex the host of the meeting — Canada — and members of his Cabinet have taken swipes at Europe’s use of renewable energy. Rather than aligning with much of the world's assertion that fossil fuels should be tempered, Trump embraces the opposite position — drill for more oil and gas and keep burning coal, while repealing environmental regulations on the biggest sources of U.S. carbon pollution.

Those moves illustrate his rejection of climate science and underscore his outlying positions on global warming in the G7.

Here are five things to know about the summit.

Who will be there?

The group comprises Canada, France, Germany, Italy, Japan, the United Kingdom and the United States — plus the European Union.
Together they account for more than 40 percent of gross domestic product globally and around a quarter of all energy-related carbon dioxide pollution, according to the International Energy Agency. The U.S. is the only one among them that is not trying to hit a carbon reduction goal.

Some emerging economies have also been invited, including Mexico, India, South Africa and Brazil, the host of this year’s COP30 climate talks in November.

Ahead of the meeting, the office of Canada's prime minister, Mark Carney, said he and Brazilian President Luiz Inácio Lula da Silva agreed to strengthen cooperation on energy security and critical minerals. White House press secretary Karoline Leavitt said Trump would be having "quite a few" bilateral meetings but that his schedule was in flux.

The G7 first came together 50 years ago following the Arab oil embargo. Since then, its seven members have all joined the United Nations Framework Convention on Climate Change and the Paris Agreement. The U.S. is the only nation in the group that has withdrawn from the Paris Agreement, which counts almost every country in the world as a signatory.

What’s on the table?

Among Canada’s top priorities as host are strengthening energy security and fortifying critical mineral supply chains. Carney would also like to see some agreement on joint wildfire action.

Expanding supply chains for critical minerals — and competing more aggressively with China over those resources — could be areas of common ground among the leaders. Climate change is expected to remain divisive.
Looming over the discussions will be tariffs — which Trump has applied across the board — because they will have an impact on the clean energy transition.

“I think probably the majority of the conversation will be less about climate per se, or certainly not using climate action as the frame, but more about energy transition and infrastructure as a way of kind of bridging the known gaps between most of the G7 and where the United States is right now,” said Dan Baer, director of the Europe program at the Carnegie Endowment for International Peace.

What are the possible outcomes?

The leaders could issue a communique at the end of their meeting, but those statements are based on consensus, something that would be difficult to reach without other G7 countries capitulating to Trump. Bloomberg reported Wednesday that nations won’t try to reach a joint agreement, in part because bridging gaps on climate change could be too hard.

Instead, Carney could issue a chair’s summary or joint statements based on certain issues.

The question is how far Canada will go to accommodate the U.S., which could try to roll back past statements on advancing clean energy, said Andrew Light, former assistant secretary of Energy for international affairs, who led ministerial-level negotiations for the G7.

“They might say, rather than watering everything down that we accomplished in the last four years, we just do a chair's statement, which summarizes the debate,” Light said. “That will show you that you didn't get consensus, but you also didn't get capitulation.”

What to watch for

If there is a communique, Light says he’ll be looking for whether there is tougher language on China and any signal of support for science and the Paris Agreement. During his first term, Trump refused to support the Paris accord in the G7 and G20 declarations.

The statement could avoid climate and energy issues entirely.
But if it backtracks on those issues, that could be a sign that countries made a deal by trading climate-related language for something else, Light said.

Baer of Carnegie said a statement framed around energy security and infrastructure could be seen as a “pragmatic adaptation” to the U.S. administration, rather than an indication that other leaders aren’t concerned about climate change.

Climate activists have lower expectations.

“Realistically, we can expect very little, if any, mention of climate change,” said Caroline Brouillette, executive director of Climate Action Network Canada.

“The message we should be expecting from those leaders is that climate action remains a priority for the rest of the G7 … whether it's on the transition away from fossil fuels and supporting developing countries through climate finance,” she said. “Especially now that the U.S. is stepping back, we need countries, including Canada, to be stepping up.”

Best- and worst-case scenarios

The challenge for Carney will be preventing any further rupture with Trump, analysts said.

In 2018, Trump made a hasty exit from the G7 summit, also in Canada that year, due largely to trade disagreements. He retracted his support for the joint statement.

“The best, [most] realistic case outcome is that things don't get worse,” said Baer.

The worst-case scenario? Some kind of “highly personalized spat” that could add to the sense of disorder, he added.

“I think the G7 on the one hand has the potential to be more important than ever, as fewer and fewer platforms for international cooperation seem to be able to take action,” Baer said. “So it's both very important and also I don't have super-high expectations.”

Reprinted from E&E News with permission from POLITICO, LLC. Copyright 2025. E&E News provides essential news for energy and environment professionals.
  • Medieval cold case is a salacious tale of sex, power, and mayhem

    The murder of John Forde was the culmination of years of political, social, and criminal intrigue.
     


    Researchers have uncovered handwritten letters, court documents, and a coroner’s report related to the nearly 700-year-old cold case murder of a medieval priest. Published on June 5 in the journal Criminal Law Forum, the investigation draws on direct archival evidence from Cambridge University that is helping fill in the gaps in a high-profile true-crime scandal that would make headlines even today. But despite a mountain of firsthand accounts, the murder’s masterminds never saw justice.
    The ‘planned and cold-blooded’ crime
    On Friday, May 3, 1337, Anglican priest John Forde began a walk along downtown London’s Cheapside street after vespers (evening prayers), shortly before sunset. At one point, a clergyman familiar to Forde by the name of Hasculph Neville approached him to begin a “pleasant conversation.” As the pair neared St. Paul’s Cathedral, four men ambushed the priest. One of the attackers then proceeded to slit Forde’s throat using a 12-inch dagger as two other assailants stabbed him in the stomach in front of onlookers.
    The vicious crime wasn’t a brazen robbery or politically motivated attack. It was likely a premeditated murder orchestrated by Ela Fitzpayne, a noblewoman, London crime syndicate leader—and potentially Forde’s lover.
    “We are looking at a murder commissioned by a leading figure of the English aristocracy. It is planned and cold-blooded, with a family member and close associates carrying it out, all of which suggests a revenge motive,” Cambridge University criminology professor Manuel Eisner explained in a statement.
    The location of the murder of John Forde on May 3, 1337. Credit: Medieval Murder Maps / University of Cambridge’s Institute of Criminology / Historic Towns Trust.
    A longstanding feud
    To understand how such a brutal killing could take place in daylight on a busy London street, it’s necessary to backtrack at least five years. In January 1332, the Archbishop of Canterbury sent a letter to the Bishop of Winchester that included a number of reputation-ruining claims surrounding Fitzpayne. In particular, Archbishop Simon Mepham described sexual relationships involving “knights and others, single and married, and even with clerics in holy orders.”
    The wide-ranging punishments for such sinful behavior could include a prohibition on wearing gold and other precious jewelry, as well as large tithes to monastic orders and the poor. But the most humiliating atonement often came in the form of a public walk of shame. The act of contrition involved walking barefoot across Salisbury Cathedral—England’s longest nave—in order to deliver a hand-carried, four-pound wax candle to the church altar. What’s more, Archbishop Mepham commanded that Fitzpayne must repeat this penance every autumn for seven years.
    Fitzpayne was having none of it. According to Mepham’s message, the noblewoman chose to continue listening to a “spirit of pride” (and the devil), and refused to abide by the judgment. A second letter sent by the Archbishop that April also alleged that she had since absconded from her husband, Sir Robert Fitzpayne, and was hiding in London’s Rotherhithe district along the Thames River. Due to this, Archbishop Mepham reported that Ela Fitzpayne had been excommunicated from the church.
    Image of the Archbishop of Canterbury’s letters to the Bishop of Winchester on the subject of Ela Fitzpayne, from the register of John de Stratford. Credit: Hampshire Archives and Hampshire County Council.
    Raids and rats
    But who tipped the clergy off to her indiscretions? According to Eisner’s review of original documents as part of the Cambridge University Institute of Criminology’s Medieval Murder Maps project, it was almost certainly her ex-lover, the soon-to-be-murdered John Forde. He was the only alleged lover named in Archbishop Mepham’s letters, and served as a church rector in a village located on the Fitzpayne family’s estate at the time of the suspected affair. 
    “The archbishop imposed heavy, shameful public penance on Ela, which she seems not to have complied with, but may have sparked a thirst for vengeance,” Eisner said. “Not least as John Forde appears to have escaped punishment by the church.”
    But Forde’s relationship with the Fitzpaynes seems to have extended to even more illicit activities. In another record reviewed by Eisner, both Ela Fitzpayne and John Forde had been indicted by a Royal Commission in 1322. The crime: assisting in the raid of a Benedictine priory alongside Sir Fitzpayne. They and others reportedly assaulted the priory a year earlier, making off with around 18 oxen, 30 pigs, and 200 sheep. The monastery coincidentally served as a French abbey’s outpost amid increasing tensions between France and England in the years leading up to the Hundred Years’ War.
    Archbishop Mepham was almost certainly displeased after hearing about the indictment of one of his own clergy. A strict administrator himself, Mepham “was keen to enforce moral discipline among the gentry and nobility,” added Eisner. He theorizes that Forde copped to the affair after getting leaned on by superiors, which subsequently led to the campaign to shame Ela Fitzpayne as a means to reassert the Church’s authority over English nobility. Forde, unfortunately, was caught between the two sides.
    “John Forde may have had split loyalties,” argued Eisner. “One to the Fitzpayne family, who were likely patrons of his church and granted him the position. And the other to the bishops who had authority over him as a clergy member.”
    Archbishop Mepham ultimately wouldn’t live to see the scandal’s full consequences. Fitzpayne never accepted her walk of shame, and the church elder died a year after sending the incriminating letters. Eisner believes the Fitzpaynes greenlit their hit job on Forde only after the dust had seemingly settled. It doesn’t help their case that three bystanders said the man who slit the rector’s throat was none other than Ela Fitzpayne’s own brother, Hugh Lovell. They also named two family servants as Forde’s other assailants.
    Archbishop Mepham died four years before Forde’s murder. Credit: Hampshire Archives and Hampshire County Council
    Turning a blind eye
    Anyone waiting for justice in this medieval saga will likely be disappointed.
    “Despite naming the killers and clear knowledge of the instigator, when it comes to pursuing the perpetrators, the jury turn[ed] a blind eye,” Eisner said.
    Eisner explained the circumstances surrounding an initial lack of convictions were simply “implausible.” No one supposedly could locate the accused to bring to trial, despite the men belonging to one of England’s highest nobility houses. Meanwhile, the court claimed Hugh Lovell had no belongings available to confiscate.
    “This was typical of the class-based justice of the day,” said Eisner.
    In the end, the only charge that ever stuck in the murder case was an indictment against one of the family’s former servants. Five years after the first trial, in 1342, Hugh Colne was convicted of being one of the men to stab Forde in the stomach and sentenced to the notorious Newgate Prison.
    As dark and sordid as the multiyear medieval drama was, it apparently didn’t change much between Ela Fitzpayne and her husband, Sir Robert. She and the baron remained married until his death in 1354—when she subsequently inherited all his property.
    “Where rule of law is weak, we see killings committed by the highest ranks in society, who will take power into their own hands, whether it’s today or seven centuries ago,” said Eisner.
    That said, the criminology professor couldn’t help but concede that Ela Fitzpayne was an “extraordinary” individual, regardless of the era.
    “A woman in 14th century England who raided priories, openly defied the Archbishop of Canterbury, and planned the assassination of a priest,” he said. “Ela Fitzpayne appears to have been many things.”
    WWW.POPSCI.COM
    Medieval cold case is a salacious tale of sex, power, and mayhem
  • Manus has kick-started an AI agent boom in China

    Last year, China saw a boom in foundation models, the do-everything large language models that underpin the AI revolution. This year, the focus has shifted to AI agents—systems that are less about responding to users’ queries and more about autonomously accomplishing things for them. 

    There are now a host of Chinese startups building these general-purpose digital tools, which can answer emails, browse the internet to plan vacations, and even design an interactive website. Many of these have emerged in just the last two months, following in the footsteps of Manus—a general AI agent that sparked weeks of social media frenzy for invite codes after its limited-release launch in early March. 

    These emerging AI agents aren’t large language models themselves. Instead, they’re built on top of them, using a workflow-based structure designed to get things done. A lot of these systems also introduce a different way of interacting with AI. Rather than just chatting back and forth with users, they are optimized for managing and executing multistep tasks—booking flights, managing schedules, conducting research—by using external tools and remembering instructions. 
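    The layered design described above, with an LLM acting as a planner on top, external tools underneath, and remembered results carried forward as context, can be sketched roughly in code. This is a minimal illustration only, not any particular product’s implementation: the `plan` function stands in for a real LLM call, and the tool names and task are hypothetical.

```python
from typing import Callable

# Tools the agent may call. Real agents wrap browsers, email clients,
# calendars, and other external APIs; these are toy stand-ins.
TOOLS: dict[str, Callable[[str], str]] = {
    "search_flights": lambda q: f"3 flights found for {q}",
    "add_to_calendar": lambda q: f"event '{q}' scheduled",
}

def plan(task: str) -> list[tuple[str, str]]:
    """Stand-in for the LLM planner: decompose a task into (tool, argument) steps."""
    if "flight" in task:
        return [("search_flights", task), ("add_to_calendar", task)]
    return []  # planner found no applicable tools

def run_agent(task: str) -> list[str]:
    """Execute each planned step with its tool, accumulating a memory of results."""
    memory: list[str] = []  # persisted results serve as context for later steps
    for tool_name, arg in plan(task):
        memory.append(TOOLS[tool_name](arg))
    return memory

print(run_agent("book flight to Tokyo"))
```

    The point of the sketch is the separation of concerns: the language model only decides *what* to do next, while deterministic tool code actually does it, which is why these agents are built on top of LLMs rather than being LLMs themselves.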

    China could take the lead on building these kinds of agents. The country’s tightly integrated app ecosystems, rapid product cycles, and digitally fluent user base could provide a favorable environment for embedding AI into daily life. 

    For now, its leading AI agent startups are focusing their attention on the global market, because the best Western models don’t operate inside China’s firewalls. But that could change soon: Tech giants like ByteDance and Tencent are preparing their own AI agents that could bake automation directly into their native super-apps, pulling data from their vast ecosystem of programs that dominate many aspects of daily life in the country. 

    As the race to define what a useful AI agent looks like unfolds, a mix of ambitious startups and entrenched tech giants are now testing how these tools might actually work in practice—and for whom.

    Set the standard

    It’s been a whirlwind few months for Manus, which was developed by the Wuhan-based startup Butterfly Effect. The company raised $75 million in a funding round led by the US venture capital firm Benchmark, took the product on an ambitious global roadshow, and hired dozens of new employees. 

    Even before registration opened to the public in May, Manus had become a reference point for what a broad, consumer‑oriented AI agent should accomplish. Rather than handling narrow chores for businesses, this “general” agent is designed to be able to help with everyday tasks like trip planning, stock comparison, or your kid’s school project. 

    Unlike previous AI agents, Manus uses a browser-based sandbox that lets users supervise the agent like an intern, watching in real time as it scrolls through web pages, reads articles, or codes actions. It also proactively asks clarifying questions and supports long-term memory that serves as context for future tasks.

    “Manus represents a promising product experience for AI agents,” says Ang Li, cofounder and CEO of Simular, a startup based in Palo Alto, California, that’s building computer-use agents (AI agents that control a virtual computer). “I believe Chinese startups have a huge advantage when it comes to designing consumer products, thanks to cutthroat domestic competition that leads to fast execution and greater attention to product details.”

    In the case of Manus, the competition is moving fast. Two of the buzziest follow‑ups, Genspark and Flowith, are already boasting benchmark scores that match or edge past Manus’s. 

    Genspark, led by former Baidu executives Eric Jing and Kay Zhu, links many small “super agents” through what it calls multi‑component prompting. The agent can switch among several large language models, accepts both images and text, and carries out tasks from making slide decks to placing phone calls. Whereas Manus relies heavily on Browser Use, a popular open-source product that lets agents operate a web browser in a virtual window like a human, Genspark directly integrates with a wide array of tools and APIs. Launched in April, the company says that it already has over 5 million users and over $36 million in yearly revenue.

    Flowith, the work of a young team that first grabbed public attention in April 2025 at a developer event hosted by the popular social media app Xiaohongshu, takes a different tack. Marketed as an “infinite agent,” it opens on a blank canvas where each question becomes a node on a branching map. Users can backtrack, take new branches, and store results in personal or sharable “knowledge gardens”—a design that feels more like project management software (think Notion) than a typical chat interface. Every inquiry or task builds its own mind-map-like graph, encouraging a more nonlinear and creative interaction with AI. Flowith’s core agent, NEO, runs in the cloud and can perform scheduled tasks like sending emails and compiling files. The founders want the app to be a “knowledge marketbase” and aim to tap into the social aspect of AI, with the aspiration of becoming “the OnlyFans of AI knowledge creators.”

    What Genspark and Flowith also share with Manus is global ambition: both have stated that their primary focus is the international market.

    A global address

    Startups like Manus, Genspark, and Flowith—though founded by Chinese entrepreneurs—could blend seamlessly into the global tech scene and compete effectively abroad. Founders, investors, and analysts that MIT Technology Review has spoken to believe Chinese companies are moving fast, executing well, and quickly coming up with new products. 

    Money reinforces the pull to launch overseas. Customers there pay more, and there are plenty to go around. “You can price in USD, and with the exchange rate that’s a sevenfold multiplier,” Manus cofounder Xiao Hong quipped on a podcast. “Even if we’re only operating at 10% power because of cultural differences overseas, we’ll still make more than in China.”

    But creating the same functionality in China is a challenge. Major US AI companies including OpenAI and Anthropic have opted out of mainland China because of geopolitical risks and challenges with regulatory compliance. Their absence initially created a black market as users resorted to VPNs and third-party mirrors to access tools like ChatGPT and Claude. That vacuum has since been filled by a new wave of Chinese chatbots—DeepSeek, Doubao, Kimi—but the appetite for foreign models hasn’t gone away. 

    Manus, for example, uses Anthropic’s Claude Sonnet—widely considered the top model for agentic tasks. Manus cofounder Zhang Tao has repeatedly praised Claude’s ability to juggle tools, remember contexts, and hold multi‑round conversations—all crucial for turning chatty software into an effective executive assistant.

    But the company’s use of Sonnet has made its agent functionally unusable inside China without a VPN. If you open Manus from a mainland IP address, you’ll see a notice explaining that the team is “working on integrating Qwen’s model,” a special local version that is built on top of Alibaba’s open-source model. 

    An engineer overseeing ByteDance’s work on developing an agent, who spoke to MIT Technology Review anonymously to avoid sanction, said that the absence of Claude Sonnet models “limits everything we do in China.” DeepSeek’s open models, he added, still hallucinate too often and lack training on real‑world workflows. Developers we spoke with rank Alibaba’s Qwen series as the best domestic alternative, yet most say that switching to Qwen knocks performance down a notch.

    Jiaxin Pei, a postdoctoral researcher at Stanford’s Institute for Human‑Centered AI, thinks that gap will close: “Building agentic capabilities in base LLMs has become a key focus for many LLM builders, and once people realize the value of this, it will only be a matter of time.”

    For now, Manus is doubling down on audiences it can already serve. In a written response, the company said its “primary focus is overseas expansion,” noting that new offices in San Francisco, Singapore, and Tokyo have opened in the past month.

    A super‑app approach

    Although the concept of AI agents is still relatively new, the consumer-facing AI app market in China is already crowded with major tech players. DeepSeek remains the most widely used, while ByteDance’s Doubao and Moonshot’s Kimi have also become household names. However, most of these apps are still optimized for chat and entertainment rather than task execution. This gap in the local market has pushed China’s big tech firms to roll out their own user-facing agents, though early versions remain uneven in quality and rough around the edges. 

    ByteDance is testing Coze Space, an AI agent based on its own Doubao model family that lets users toggle between “plan” and “execute” modes, so they can either directly guide the agent’s actions or step back and watch it work autonomously. It connects up to 14 popular apps, including GitHub, Notion, and the company’s own Lark office suite. Early reviews say the tool can feel clunky and has a high failure rate, but it clearly aims to match what Manus offers.

    Meanwhile, Zhipu AI has released a free agent called AutoGLM Rumination, built on its proprietary ChatGLM models. Shanghai‑based Minimax has launched Minimax Agent. Both products look almost identical to Manus and demo basic tasks such as building a simple website, planning a trip, making a small Flash game, or running quick data analysis.

    Despite the limited usability of most general AI agents launched within China, big companies have plans to change that. During a May 15 earnings call, Tencent president Liu Zhiping teased an agent that would weave automation directly into China’s most ubiquitous app, WeChat. 

    Considered the original super-app, WeChat already handles messaging, mobile payments, news, and millions of mini‑programs that act like embedded apps. These programs give Tencent, its developer, access to data from millions of services that pervade everyday life in China, an advantage most competitors can only envy.

    Historically, China’s consumer internet has splintered into competing walled gardens—share a Taobao link in WeChat and it resolves as plaintext, not a preview card. Unlike the more interoperable Western internet, China’s tech giants have long resisted integration with one another, choosing to wage platform war at the expense of a seamless user experience.

    But the use of mini‑programs has given WeChat unprecedented reach across services that once resisted interoperability, from gym bookings to grocery orders. An agent able to roam that ecosystem could bypass the integration headaches dogging independent startups.

    Alibaba, the e-commerce giant behind the Qwen model series, has been a front-runner in China’s AI race but has been slower to release consumer-facing products. Even though Qwen was the most downloaded open-source model on Hugging Face in 2024, it didn’t power a dedicated chatbot app until early 2025. In March, Alibaba rebranded its cloud storage and search app Quark into an all-in-one AI search tool. By June, Quark had introduced DeepResearch—a new mode that marks its most agent-like effort to date. 

    ByteDance and Alibaba did not reply to MIT Technology Review’s request for comments.

    “Historically, Chinese tech products tend to pursue the all-in-one, super-app approach, and the latest Chinese AI agents reflect just that,” says Li of Simular, who previously worked at Google DeepMind on AI-enabled work automation. “In contrast, AI agents in the US are more focused on serving specific verticals.”

    Pei, the researcher at Stanford, says that existing tech giants could have a huge advantage in bringing the vision of general AI agents to life—especially those with built-in integration across services. “The customer-facing AI agent market is still very early, with tons of problems like authentication and liability,” he says. “But companies that already operate across a wide range of services have a natural advantage in deploying agents at scale.”
    WWW.TECHNOLOGYREVIEW.COM
    Manus has kick-started an AI agent boom in China
    Last year, China saw a boom in foundation models, the do-everything large language models that underpin the AI revolution. This year, the focus has shifted to AI agents—systems that are less about responding to users’ queries and more about autonomously accomplishing things for them.  There are now a host of Chinese startups building these general-purpose digital tools, which can answer emails, browse the internet to plan vacations, and even design an interactive website. Many of these have emerged in just the last two months, following in the footsteps of Manus—a general AI agent that sparked weeks of social media frenzy for invite codes after its limited-release launch in early March.  These emerging AI agents aren’t large language models themselves. Instead, they’re built on top of them, using a workflow-based structure designed to get things done. A lot of these systems also introduce a different way of interacting with AI. Rather than just chatting back and forth with users, they are optimized for managing and executing multistep tasks—booking flights, managing schedules, conducting research—by using external tools and remembering instructions.  China could take the lead on building these kinds of agents. The country’s tightly integrated app ecosystems, rapid product cycles, and digitally fluent user base could provide a favorable environment for embedding AI into daily life.  For now, its leading AI agent startups are focusing their attention on the global market, because the best Western models don’t operate inside China’s firewalls. But that could change soon: Tech giants like ByteDance and Tencent are preparing their own AI agents that could bake automation directly into their native super-apps, pulling data from their vast ecosystem of programs that dominate many aspects of daily life in the country.  
As the race to define what a useful AI agent looks like unfolds, a mix of ambitious startups and entrenched tech giants are now testing how these tools might actually work in practice—and for whom. Set the standard It’s been a whirlwind few months for Manus, which was developed by the Wuhan-based startup Butterfly Effect. The company raised $75 million in a funding round led by the US venture capital firm Benchmark, took the product on an ambitious global roadshow, and hired dozens of new employees.  Even before registration opened to the public in May, Manus had become a reference point for what a broad, consumer‑oriented AI agent should accomplish. Rather than handling narrow chores for businesses, this “general” agent is designed to be able to help with everyday tasks like trip planning, stock comparison, or your kid’s school project.  Unlike previous AI agents, Manus uses a browser-based sandbox that lets users supervise the agent like an intern, watching in real time as it scrolls through web pages, reads articles, or codes actions. It also proactively asks clarifying questions, supports long-term memory that would serve as context for future tasks. “Manus represents a promising product experience for AI agents,” says Ang Li, cofounder and CEO of Simular, a startup based in Palo Alto, California, that’s building computer use agents, AI agents that control a virtual computer. “I believe Chinese startups have a huge advantage when it comes to designing consumer products, thanks to cutthroat domestic competition that leads to fast execution and greater attention to product details.” In the case of Manus, the competition is moving fast. Two of the most buzzy follow‑ups, Genspark and Flowith, for example, are already boasting benchmark scores that match or edge past Manus’s.  Genspark, led by former Baidu executives Eric Jing and Kay Zhu, links many small “super agents” through what it calls multi‑component prompting. 
The agent can switch among several large language models, accepts both images and text, and carries out tasks from making slide decks to placing phone calls. Whereas Manus relies heavily on Browser Use, a popular open-source product that lets agents operate a web browser in a virtual window like a human, Genspark directly integrates with a wide array of tools and APIs. Launched in April, the company says that it already has over 5 million users and over $36 million in yearly revenue. Flowith, the work of a young team that first grabbed public attention in April 2025 at a developer event hosted by the popular social media app Xiaohongshu, takes a different tack. Marketed as an “infinite agent,” it opens on a blank canvas where each question becomes a node on a branching map. Users can backtrack, take new branches, and store results in personal or sharable “knowledge gardens”—a design that feels more like project management software (think Notion) than a typical chat interface. Every inquiry or task builds its own mind-map-like graph, encouraging a more nonlinear and creative interaction with AI. Flowith’s core agent, NEO, runs in the cloud and can perform scheduled tasks like sending emails and compiling files. The founders want the app to be a “knowledge marketbase”, and aims to tap into the social aspect of AI with the aspiration of becoming “the OnlyFans of AI knowledge creators”. What they also share with Manus is the global ambition. Both Genspark and Flowith have stated that their primary focus is the international market. A global address Startups like Manus, Genspark, and Flowith—though founded by Chinese entrepreneurs—could blend seamlessly into the global tech scene and compete effectively abroad. Founders, investors, and analysts that MIT Technology Review has spoken to believe Chinese companies are moving fast, executing well, and quickly coming up with new products.  Money reinforces the pull to launch overseas. 
Customers there pay more, and there are plenty to go around. “You can price in USD, and with the exchange rate that’s a sevenfold multiplier,” Manus cofounder Xiao Hong quipped on a podcast. “Even if we’re only operating at 10% power because of cultural differences overseas, we’ll still make more than in China.” But creating the same functionality in China is a challenge. Major US AI companies including OpenAI and Anthropic have opted out of mainland China because of geopolitical risks and challenges with regulatory compliance. Their absence initially created a black market as users resorted to VPNs and third-party mirrors to access tools like ChatGPT and Claude. That vacuum has since been filled by a new wave of Chinese chatbots—DeepSeek, Doubao, Kimi—but the appetite for foreign models hasn’t gone away.  Manus, for example, uses Anthropic’s Claude Sonnet—widely considered the top model for agentic tasks. Manus cofounder Zhang Tao has repeatedly praised Claude’s ability to juggle tools, remember contexts, and hold multi‑round conversations—all crucial for turning chatty software into an effective executive assistant. But the company’s use of Sonnet has made its agent functionally unusable inside China without a VPN. If you open Manus from a mainland IP address, you’ll see a notice explaining that the team is “working on integrating Qwen’s model,” a special local version that is built on top of Alibaba’s open-source model.  An engineer overseeing ByteDance’s work on developing an agent, who spoke to MIT Technology Review anonymously to avoid sanction, said that the absence of Claude Sonnet models “limits everything we do in China.” DeepSeek’s open models, he added, still hallucinate too often and lack training on real‑world workflows. Developers we spoke with rank Alibaba’s Qwen series as the best domestic alternative, yet most say that switching to Qwen knocks performance down a notch. 
Jiaxin Pei, a postdoctoral researcher at Stanford’s Institute for Human-Centered AI, thinks that gap will close: “Building agentic capabilities in base LLMs has become a key focus for many LLM builders, and once people realize the value of this, it will only be a matter of time.”

For now, Manus is doubling down on audiences it can already serve. In a written response, the company said its “primary focus is overseas expansion,” noting that new offices in San Francisco, Singapore, and Tokyo have opened in the past month.

A super-app approach

Although the concept of AI agents is still relatively new, the consumer-facing AI app market in China is already crowded with major tech players. DeepSeek remains the most widely used, while ByteDance’s Doubao and Moonshot’s Kimi have also become household names. However, most of these apps are still optimized for chat and entertainment rather than task execution. This gap in the local market has pushed China’s big tech firms to roll out their own user-facing agents, though early versions remain uneven in quality and rough around the edges.

ByteDance is testing Coze Space, an AI agent based on its own Doubao model family that lets users toggle between “plan” and “execute” modes, so they can either directly guide the agent’s actions or step back and watch it work autonomously. It connects up to 14 popular apps, including GitHub, Notion, and the company’s own Lark office suite. Early reviews say the tool can feel clunky and has a high failure rate, but it clearly aims to match what Manus offers.

Meanwhile, Zhipu AI has released a free agent called AutoGLM Rumination, built on its proprietary ChatGLM models, and Shanghai-based Minimax has launched Minimax Agent. Both products look almost identical to Manus and demo basic tasks such as building a simple website, planning a trip, making a small Flash game, or running quick data analysis.
Despite the limited usability of most general AI agents launched within China, big companies have plans to change that. During a May 15 earnings call, Tencent president Liu Zhiping teased an agent that would weave automation directly into China’s most ubiquitous app, WeChat.

Considered the original super-app, WeChat already handles messaging, mobile payments, news, and millions of mini-programs that act like embedded apps. These programs give Tencent, WeChat’s developer, access to data from millions of services that pervade everyday life in China, an advantage most competitors can only envy.

Historically, China’s consumer internet has splintered into competing walled gardens—share a Taobao link in WeChat and it resolves as plain text, not a preview card. Unlike the more interoperable Western internet, China’s tech giants have long resisted integration with one another, choosing to wage platform wars at the expense of a seamless user experience. But the use of mini-programs has given WeChat unprecedented reach across services that once resisted interoperability, from gym bookings to grocery orders. An agent able to roam that ecosystem could bypass the integration headaches dogging independent startups.

Alibaba, the e-commerce giant behind the Qwen model series, has been a front-runner in China’s AI race but has been slower to release consumer-facing products. Even though Qwen was the most downloaded open-source model on Hugging Face in 2024, it didn’t power a dedicated chatbot app until early 2025. In March, Alibaba rebranded its cloud storage and search app Quark into an all-in-one AI search tool. By June, Quark had introduced DeepResearch—a new mode that marks its most agent-like effort to date.

ByteDance and Alibaba did not reply to MIT Technology Review’s requests for comment.
“Historically, Chinese tech products tend to pursue the all-in-one, super-app approach, and the latest Chinese AI agents reflect just that,” says Li of Simular, who previously worked at Google DeepMind on AI-enabled work automation. “In contrast, AI agents in the US are more focused on serving specific verticals.” Pei, the researcher at Stanford, says that existing tech giants could have a huge advantage in bringing the vision of general AI agents to life—especially those with built-in integration across services. “The customer-facing AI agent market is still very early, with tons of problems like authentication and liability,” he says. “But companies that already operate across a wide range of services have a natural advantage in deploying agents at scale.”
  • Trump makes a last-minute backtrack on his pick to lead NASA

    NASA's next mission will be to find a new agency leader, following a dramatic reversal from President Donald Trump. In a post made on Truth Social, the president withdrew his nomination of Jared Isaacman as the head of NASA. As first reported by Semafor, the pullback comes just a few days before Isaacman was due in front of the US Senate for a confirmation vote.
    Trump detailed in the post that he will soon announce another nominee who is more aligned with the president's mission and will "put America First in Space." Liz Huston, a White House spokesperson, said in a statement that it was "essential that the next leader of NASA is in complete alignment with President Trump’s America First agenda." According to The New York Times, unnamed sources attribute the withdrawal to Isaacman's previous donations to "prominent Democrats."
    Besides his role as CEO of payment processing company Shift4, Isaacman has been venturing into the world of commercial space travel. The billionaire businessman has been to space twice, even serving as mission commander of the Polaris Dawn mission, which was operated by SpaceX and saw the first commercial spacewalk. Isaacman was known as a close ally of Elon Musk, the CEO of SpaceX, who recently left his White House role as an adviser to the president. This article originally appeared on Engadget at https://www.engadget.com/science/space/trump-makes-a-last-minute-backtrack-on-his-pick-to-lead-nasa-153253836.html
  • HBO and Max New Releases: June 2025

    HBO original The Gilded Age returns for a third season on June 22. The series tells a fictionalized story set during America’s Gilded Age, a time of rapidly increasing prosperity and industry for those lucky enough to capitalize on it. New York City’s social scene is forced to adapt as people with old money (inherited wealth) and those with new money (wealth from rising industries) clash. Carrie Coon, Morgan Spector, Taissa Farmiga, Cynthia Nixon, and more star in this compelling drama.

    Fans of The Hunger Games series will be happy to find all four movies in the series on Max from the first of the month. If you need a break from Sunrise on the Reaping theories or simply want to revisit the story that started it all, Max is the place to be.
    A Minecraft Movie will also be available to stream on Max this month, though the date has yet to be revealed by Warner Bros. Discovery.
    Here’s everything coming to HBO and Max in June.

    HBO and Max New Releases – June 2025
    June 1
    A Hologram for the King (2016)
    A Nightmare on Elm Street (2010)
    A Perfect Getaway (2009)
    Backtrack (2016)
    Batman and Superman: Battle of the Super Sons (2022)
    Black Patch (1957)
    Blues in the Night (1941)
    Casino (1995)
    Fight Club (1999)
    Gentleman Jim (1942)
    Hellboy (2004)
    I Am Not Your Negro (2017)
    Igor (2008)
    Illegal (1955)
    In the Good Old Summertime (1949)
    Invasion of the Body Snatchers (1978)
    Kid Glove Killer (1942)
    Meet Me in St. Louis (1944)
    My Scientology Movie (2017)
    Numbered Men (1930)
    One Foot in Heaven (1941)
    Parasite (2019)
    Presenting Lily Mars (1943)
    Pride & Prejudice (2005)
    Public Enemies (2009)
    Reign of the Supermen (2019)
    Serenade (1956)
    Silver River (1948)
    Spaceballs (1987)
    Split (2017)
    Strike Up the Band (1940)
    Summer Stock (1950)
    Superman: Man of Tomorrow (2020)
    Superman: Red Son (2020)
    Superman: Unbound (2013)
    Superman/Batman: Public Enemies (2009)
    Thank Your Lucky Stars (1943)
    The Death of Superman (2018)
    The Fighting 69th (1940)
    The Harvey Girls (1946)
    The Hunger Games (2012)
    The Hunger Games: Catching Fire (2013)
    The Hunger Games: Mockingjay Part 1 (2014)
    The Hunger Games: Mockingjay Part 2 (2015)
    The Man Who Invented Christmas (2017)
    The Match King (1932)
    The Mayor of Hell (1933)
    The Mortician (HBO Original)
    The Nitwits (1935)
    The Prince and the Pauper (1937)
    The Sea Chase (1955)
    The Sea Hawk (1940)
    The Sunlit Night (2019)
    The Verdict (1946)
    They Made Me a Criminal (1939)
    This Side of the Law (1950)
    Three Faces East (1930)
    Three Strangers (1946)
    Total Drama Island, Season 2 (Cartoon Network)
    Wagons West (1952)
    Words and Music (1948)
    You’ll Find Out (1940)
    Ziegfeld Follies (1946)

    June 2
    BBQ Brawl, Season 6 (Food Network)

    June 3
    Bullet Train (2022)
    Ugliest House in America, Season 6 (HGTV)

    June 4
    1000-lb Roomies, Season 1 (TLC)
    Fatal Destination, Season 1 (ID)

    June 5
    Bea’s Block, Season 1C (Max Original)
    Chespirito: Not Really on Purpose, Season 1 (Max Original)

    June 6
    House Hunters International: Volume 9, Season 201 (HGTV)
    Parthenope (A24)

    June 10
    Virgins, Season 1 (TLC)

    June 11
    Guy’s Grocery Games, Season 38 (Food Network)

    June 12
    Bitchin’ Rides, Season 11
    Mini Beat Power Rockers: A Superheroic Night (Discovery International)

    June 13
    Cleaner (2025)
    House Hunters: Volume 10, Season 240 (HGTV)
    Maine Cabin Masters, Season 10 (Magnolia Network)
    Super Sara (Max Original)
    Toad & Friends, Season 1B

    June 16
    Hero Ball, Season 3B

    June 17
    Dr. Sanjay Gupta Reports: Animal Pharm (CNN Originals, 2025)
    Super Mega Cakes, Season 1 (Food Network)

    June 19
    Expedition Unknown, Season 15 (Discovery)
    Mystery At Blind Frog Ranch, Season 5 (Discovery)

    June 20
    House Hunters: Volume 10, Season 241 (HGTV)
    Lu & The Bally Bunch, Season 1C (Cartoon Network)
    Now or Never: FC Montfermeil (Max Original)
    Teen Titans Go!, Season 9B (Cartoon Network)

    June 21
    The Kitchen, Season 38 (Food Network)
    The Never Ever Mets, Season 2 (OWN)

    June 22
    The Gilded Age, Season 3 (HBO Original)

    June 23
    Match Me Abroad, Season 2 (TLC)

    June 24
    Enigma (HBO Original)
    Mean Girl Murders, Season 3 (ID)
    The Invitation (2022)

    June 25
    Rehab Addict, Season 10 (HGTV)

    June 27
    House Hunters: Volume 10, Season 242 (HGTV)
    My Mom Jayne (HBO Original)
    Pati, Seasons 1&2 (Max Original)
    The Day the Earth Blew Up: A Looney Tunes Movie (2025)

    June 29
    #Somebody’s Son, Season 1 (OWN)
    Family or Fiancé, Season 4 (OWN)

    June 30
    90 Day Fiancé: Pillow Talk, Season 11 (TLC)
    Truck U, Season 21
  • CD Projekt Red tried to redesign Geralt's face once, and it backfired horribly

    Geralt, the hero of The Witcher series of games, nearly had a considerably different face. He actually did, briefly, but the game's community disliked it so much that CD Projekt Red panicked and changed it back.
    The problem? Anatomical correctness. The community didn't think Geralt was alien-looking or ugly enough.
    The year was 2010, and CD Projekt Red was ready to debut its brand-new Witcher game, The Witcher 2: Assassins of Kings, to the world. A couple of leaked videos preceded the formal announcement, but when a clutch of screenshots was eventually released, it debuted a different-looking Geralt from the one people were used to in The Witcher 1.
    Whereas Geralt had previously had the proportions of a triangle, roughly, which angled to a point on his nose and didn't seem to involve a chin of any kind, he now had much easier-on-the-eye proportions and looked like an actual person. He was even, dare I say it, handsome. It simply wouldn't do.
    Some of this was to be expected. The transition from Witcher 1 to Witcher 2 included a transition for the game's engine, moving from BioWare's Aurora engine, which once powered Neverwinter Nights, to CD Projekt Red's internally made engine Redengine. A facial design that worked well in one engine wouldn't necessarily work in both.

    Geralt fights a baddie in The Witcher 1. | Image credit: CD Projekt Red

    "The problem was that The Witcher 1 was heavily stylised," CD Projekt Red art director Pawel Mielniczuk explained to me. "From an art point of view, it was a much simpler visual fidelity than was in The Witcher 2 and Witcher 3. It was based on this Aurora engine from Neverwinter Nights - low poly, you know - so the character looks great there but the face of Geralt in The Witcher 1 wasn't very anatomically correct. It was making a good impression.
    "When we got to The Witcher 2, we had a better engine - larger budgets for polygons, more artists to sculpt nice faces, and we actually got better at making characters, already being a studio that released one game. And Geralt's [existing] face just did not match the style of the rest of the characters," he said. "It was not realistic human proportions."
    The solution was clear: redesign Geralt's face. "Let's make Geralt from scratch - nobody will notice that," Mielniczuk said, and laughed at the memory. "So we made it at the very beginning of The Witcher 2 production and we released it with this first bunch of screenshots to see what the response was, and the response was horrible! Our community just smashed us on the forums - there were almost riots there."

    Geralt's redesigned face, unveiled in the debut screenshots released for The Witcher 2. | Image credit: CD Projekt Red

    Sadly I can't find those riots on the company's forums now; 15 years of chatter has buried them. But Mielniczuk told me the comments there were to the effect of: "True Geralt: he's supposed to be ugly and inhuman!" CD Projekt Red backtracked as a result of the backlash, and it would take a further two years of tinkering, testing, and re-evaluating to get Geralt's look right for the game. "And [it] was a hybrid of The Witcher 1 Geralt and a real human," Mielniczuk said.
    By the time The Witcher 3 development came around, in around 2011-2012, the opportunity once again presented itself to tinker with Geralt's face, but this time the studio resisted. "With The Witcher 3, we actually used exactly the same model from Witcher 2, added more polygons, updated textures, but we did not touch it," Mielniczuk said.

    Geralt as pictured at the beginning of The Witcher 2. | Image credit: Eurogamer / CD Projekt Red

    That's not to say Mielniczuk didn't want to alter Geralt's face for the third game. He was the lead character artist on The Witcher 3. He hand-sculpted both Ciri's and Yennefer's faces, and he could see glaring issues with Geralt's. "If you look at the profile of Geralt: he has this incredible profile but the tip of his nose is a completely straight line from his forehead, kind of Greek proportions, and it was not fitting his face, so we wanted to fix that. But we did not," he said. "We made a decision, 'Okay, that's Geralt, he's recognisable, people are loving our character. We pass. We cannot make this mistake once again.'"
    Which brings us around to The Witcher 4, which is now in full production and we know will include Geralt to some degree. The new game will also move the series to a new engine, Unreal Engine 5, so once again there's an opportunity for a Geralt-face redesign. Will CD Projekt Red take it?

    Even the box art changed quite considerably over the course of the game's development. | Image credit: CD Projekt Red

    "It's such a grounded character right now I would really not dare to touch it," Mielniczuk said. "And in general, it's a very successful character because his face is recognisable, probably also because of these features of inhuman proportions in the upper part of the body. So no, I wouldn't update anything, just textures, normal maps, adding more details on the face, make it realistic through the surfaces, but not through the anatomy and proportions."
    But there is one thing that might tempt Mielniczuk to update Geralt's face, or rather one person, and that's Henry Cavill, the former star of The Witcher Netflix TV show. Mielniczuk is a big fan of his. "Henry was just perfect," he said. Then he added, laughing: "If I would do something to the face, I would be easily convinced to scan Henry and put him in The Witcher 4!"
    I spoke to Pawel Mielniczuk as part of a series of interviews looking back on The Witcher 3, a decade on, through the eyes of the people who made it. You can find that full piece on Eurogamer now.
    #projekt #red #tried #redesign #geralt039s
    CD Projekt Red tried to redesign Geralt's face once, and it backfired horribly
    Geralt, the hero of The Witcher series of games, nearly had a considerably different face. He actually did, briefly, but the game's community disliked it so much CD Projekt Red panicked and changed it back. The problem? Anatomical correctness. The community didn't think Geralt was alien-looking or ugly enough. The year was 2010 and CD Projekt Red was ready to debut its brand new Witcher game, The Witcher 2: Assassins of Kings, to the world. A couple of leaked videos preceded the formal announcement but when a clutch of screenshots was eventually released, it debuted a different looking Geralt to the one people were used to from The Witcher 1. Whereas Geralt had previously had the proportions of a triangle, roughly, which angled to a point on his nose and didn't seem to involve a chin of any kind, he now had much easier-on-the-eye proportions and looked like an actual person. He was even, dare I say it, handsome. It simply wouldn't do. Some of this was to be expected. The transition from Witcher 1 to Witcher 2 included a transition for the game's engine, moving from BioWare's Aurora engine, which once powered Neverwinter Nights, to CD Projekt Red's internally made engine Redengine. A facial design that worked well in one engine wouldn't necessarily work in both. Geralt fights a baddie in The Witcher 1. | Image credit: CD Projekt Red "The problem was that The Witcher 1 was heavily stylised," CD Projekt Red art director Pawel Mielniczuk explained to me. "From an art point of view, it was a much simpler visual fidelity than was in The Witcher 2 and Witcher 3. It was based on this Aurora engine from Neverwinter Nights - low poly, you know - so the character looks great there but the face of Geralt in The Witcher 1 wasn't very anatomically correct. It was making a good impression. 
"When we got to The Witcher 2, we had a better engine - larger budgets for polygons, more artists to sculpt nice faces, and we actually got better at making characters, already being a studio that released one game. And Geralt'sface just did not match the style of the rest of the characters," he said. "It was not realistic human proportions." The solution was clear: redesign Geralt's face. "Let's make Geralt from scratch - nobody will notice that," Mielniczuk said, and laughed at the memory. "So we made it at the very beginning of The Witcher 2 production and we released it with this first bunch of screenshots to see what the response was, and the response was horrible! Our community just smashed us on the forums - there were almost riots there." Geralt's redesigned face, unveiled in the debut screenshots released for The Witcher 2. | Image credit: CD Projekt Red Sadly I can't find those riots on those company forums now; 15 years of chatter has buried it. But Mielniczuk told me the comments there were to the effect of: "True Geralt: he's supposed to be ugly and inhuman!" CD Projekt Red backtracked as a result of the backlash, and it would take a further two years of tinkering, and testing and re-evaluating, to get Geralt's look right for the game. "And was a hybrid of The Witcher 1 Geralt and a real human," Mielniczuk said. By the time The Witcher 3 development came around, in around 2011-2012, the opportunity once again presented itself to tinker with Geralt's face, but this time the studio resisted. "With The Witcher 3, we actually used exactly the same model from Witcher 2, added more polygons, updated textures, but we did not touch it," Mielniczuk said. Geralt as pictured at the beginning of The Witcher 2. | Image credit: Eurogamer / CD Projekt Red That's not to say Mielniczuk didn't want to alter Geralt's face for the third game. He was the lead character artist on The Witcher 3. 
He hand-sculpted both Ciri and Yennefer's face, and he could see glaring issues with Geralt's. "If you look at the profile of Geralt: he has this incredible profile but the tip of his nose is a completely straight line from his forehead, kind of Greek proportions, and it was not fitting his face, so we wanted to fix that. But we did not," he said. "We made a decision, 'Okay, that's Geralt, he's recognisable, people are loving our character. We pass. We cannot make this mistake once again.'" Which brings us around to The Witcher 4, which is now in full production and we know will include Geralt to some degree. The new game will also move the series to a new engine, Unreal Engine 5, so once again there's an opportunity for a Geralt-face redesign. Will CD Projekt Red take it? Even the box art changed quite considerably over the course of the game's development. | Image credit: CD Projekt Red "It's such a grounded character right now I would really not dare to touch it," Mielniczuk said. "And in general, it's a very successful character because his face is recognisable, probably also because of these features of inhuman proportions in the upper part of the body. So no, I wouldn't update anything, just textures, normal maps, adding more details on the face, make it realistic through the surfaces, but not through the anatomy and proportions." But there is one thing that might tempt Mielniczuk to update Geralt's face, or rather one person, and that's Henry Cavill, the former star of The Witcher Netflix TV show. Mielniczuk is a big fan of his. "Henry was just perfect," he said. Then he added, laughing: "If I would do something to the face, I would be easily convinced to scan Henry and put him in The Witcher 4!" I spoke to Pawel Mielniczuk as part of a series of interviews looking back on The Witcher 3, a decade on, through the eyes of the people who made it. You can find that full piece on Eurogamer now. #projekt #red #tried #redesign #geralt039s
  • QwenLong-L1 solves long-context reasoning challenge that stumps current LLMs


    Alibaba Group has introduced QwenLong-L1, a new framework that enables large language models (LLMs) to reason over extremely long inputs. This development could unlock a new wave of enterprise applications that require models to understand and draw insights from extensive documents such as detailed corporate filings, lengthy financial statements, or complex legal contracts.
    The challenge of long-form reasoning for AI
    Recent advances in large reasoning models (LRMs), particularly through reinforcement learning (RL), have significantly improved their problem-solving capabilities. Research shows that when trained with RL fine-tuning, LRMs acquire skills similar to human “slow thinking,” where they develop sophisticated strategies to tackle complex tasks.
    However, these improvements are primarily seen when models work with relatively short pieces of text, typically around 4,000 tokens. The ability of these models to scale their reasoning to much longer contexts (e.g., 120,000 tokens) remains a major challenge. Such long-form reasoning requires a robust understanding of the entire context and the ability to perform multi-step analysis. “This limitation poses a significant barrier to practical applications requiring interaction with external knowledge, such as deep research, where LRMs must collect and process information from knowledge-intensive environments,” the developers of QwenLong-L1 write in their paper.
    The researchers formalize these challenges into the concept of “long-context reasoning RL.” Unlike short-context reasoning, which often relies on knowledge already stored within the model, long-context reasoning RL requires models to retrieve and ground relevant information from lengthy inputs accurately. Only then can they generate chains of reasoning based on this incorporated information. 
    Training models for this through RL is tricky and often results in inefficient learning and unstable optimization processes. Models struggle to converge on good solutions or lose their ability to explore diverse reasoning paths.
    QwenLong-L1: A multi-stage approach
    QwenLong-L1 is a reinforcement learning framework designed to help LRMs transition from proficiency with short texts to robust generalization across long contexts. The framework enhances existing short-context LRMs through a carefully structured, multi-stage process:
    Warm-up Supervised Fine-Tuning (SFT): The model first undergoes an SFT phase, where it is trained on examples of long-context reasoning. This stage establishes a solid foundation, enabling the model to ground information accurately from long inputs. It helps develop fundamental capabilities in understanding context, generating logical reasoning chains, and extracting answers.
    Curriculum-Guided Phased RL: At this stage, the model is trained through multiple phases, with the target length of the input documents gradually increasing. This systematic, step-by-step approach helps the model stably adapt its reasoning strategies from shorter to progressively longer contexts. It avoids the instability often seen when models are abruptly trained on very long texts.
    Difficulty-Aware Retrospective Sampling: The final training stage incorporates challenging examples from the preceding training phases, ensuring the model continues to learn from the hardest problems. This prioritizes difficult instances and encourages the model to explore more diverse and complex reasoning paths.
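The staged process above can be sketched as a short training loop. This is an illustrative assumption of how the pieces fit together, not the released QwenLong-L1 code: `train_step`, the dataset shape, and the 0.5 reward cutoff for "hard" examples are hypothetical, while the 20K-to-60K phase lengths follow the schedule reported for the framework.

```python
# Illustrative sketch of curriculum-guided phased RL with
# difficulty-aware retrospective sampling (hypothetical helpers,
# not the released QwenLong-L1 implementation).

def curriculum_phases(start_len=20_000, max_len=60_000, n_phases=3):
    """Target context lengths that grow across training phases."""
    step = (max_len - start_len) // (n_phases - 1)
    return [start_len + i * step for i in range(n_phases)]

def run_phased_training(dataset, train_step, phases=None):
    """Train phase by phase on ever-longer inputs, carrying the
    hardest (lowest-reward) examples forward into the next phase."""
    phases = phases or curriculum_phases()
    hard_pool = []
    for max_len in phases:
        batch = [ex for ex in dataset if ex["n_tokens"] <= max_len] + hard_pool
        rewards = [train_step(ex) for ex in batch]  # one RL update per example
        hard_pool = [ex for ex, r in zip(batch, rewards) if r < 0.5]
    return hard_pool  # examples still unsolved after the last phase
```

With a stub `train_step` that always returns full reward, the hard pool empties out; a model that keeps failing an example would instead see it resurface in every later phase.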
    The QwenLong-L1 process. Source: arXiv
    Beyond this structured training, QwenLong-L1 also uses a distinct reward system. While training for short-context reasoning tasks often relies on strict rule-based rewards (e.g., a correct answer in a math problem), QwenLong-L1 employs a hybrid reward mechanism. This combines rule-based verification, which ensures precision by checking for strict adherence to correctness criteria, with an “LLM-as-a-judge.” This judge model compares the semantic content of the generated answer with the ground truth, allowing for more flexibility and better handling of the diverse ways correct answers can be expressed when dealing with long, nuanced documents.
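A minimal sketch of such a hybrid reward, under stated assumptions: the rule check here is a normalized exact match, and `judge` is a hypothetical callable standing in for the LLM-as-a-judge score. Taking the maximum of the two signals follows the paper's formulation of the reward.

```python
def rule_reward(answer: str, gold: str) -> float:
    """Strict rule-based verification: normalized exact match."""
    return 1.0 if answer.strip().lower() == gold.strip().lower() else 0.0

def hybrid_reward(answer: str, gold: str, judge) -> float:
    """Combine rule-based precision with LLM-judge flexibility by
    taking the maximum of the two reward signals."""
    return max(rule_reward(answer, gold), judge(answer, gold))
```

A paraphrased but correct answer then scores whatever the judge assigns, while an exact match scores 1.0 regardless of the judge's verdict.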
    Putting QwenLong-L1 to the test
    The Alibaba team evaluated QwenLong-L1 using document question-answering (DocQA) as the primary task. This scenario is highly relevant to enterprise needs, where AI must understand dense documents to answer complex questions.
    Experimental results across seven long-context DocQA benchmarks showed QwenLong-L1’s capabilities. Notably, the QWENLONG-L1-32B model (based on DeepSeek-R1-Distill-Qwen-32B) achieved performance comparable to Anthropic’s Claude-3.7 Sonnet Thinking, and outperformed models like OpenAI’s o3-mini and Qwen3-235B-A22B. The smaller QWENLONG-L1-14B model also outperformed Google’s Gemini 2.0 Flash Thinking and Qwen3-32B.
    An important finding relevant to real-world applications is how RL training results in the model developing specialized long-context reasoning behaviors. The paper notes that models trained with QwenLong-L1 become better at “grounding” (linking answers to specific parts of a document), “subgoal setting” (breaking down complex questions), “backtracking” (recognizing and correcting their own mistakes mid-reasoning), and “verification” (double-checking their answers).
    For instance, while a base model might get sidetracked by irrelevant details in a financial document or get stuck in a loop of over-analyzing unrelated information, the QwenLong-L1 trained model demonstrated an ability to engage in effective self-reflection. It could successfully filter out these distractor details, backtrack from incorrect paths, and arrive at the correct answer.
    Techniques like QwenLong-L1 could significantly expand the utility of AI in the enterprise. Potential applications include legal tech (analyzing thousands of pages of legal documents), finance (deep research on annual reports and financial filings for risk assessment or investment opportunities) and customer service (analyzing long customer interaction histories to provide more informed support). The researchers have released the code for the QwenLong-L1 recipe and the weights for the trained models.

  • Qwen Researchers Propose QwenLong-L1: A Reinforcement Learning Framework for Long-Context Reasoning in Large Language Models

    While large reasoning models (LRMs) have shown impressive capabilities in short-context reasoning through reinforcement learning (RL), these gains do not generalize well to long-context scenarios. Applications such as multi-document QA, research synthesis, and legal or financial analysis require models to process and reason over sequences exceeding 100K tokens. However, RL optimization in such regimes is plagued by slower reward convergence, unstable policy updates due to KL divergence fluctuations, and reduced exploration resulting from entropy collapse. These bottlenecks reveal a fundamental gap in transitioning LRMs from short-context proficiency to long-context generalization.
    QwenLong-L1: A Structured RL Framework for Long-Context Adaptation
    To address these limitations, the Qwen Research team introduces QwenLong-L1, a novel RL framework designed to adapt LRMs to long-context reasoning tasks. The framework is structured into three key stages:

    Warm-up Supervised Fine-Tuning: Provides a stable initialization for the policy model by training on curated question-context-answer triplets, ensuring basic competence in contextual comprehension and answer extraction.
    Curriculum-Guided Phased Reinforcement Learning: Introduces a staged training process with gradually increasing context lengths. This progression enables the model to incrementally acquire long-context reasoning behaviors without destabilizing policy updates.
    Difficulty-Aware Retrospective Sampling: Enhances exploration by maintaining and reusing hard examples from previous phases, weighted by their difficulty, to encourage deeper reasoning and robustness across diverse inputs.
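The third stage's difficulty-weighted reuse of earlier examples might look like the following sketch; the `past_reward` field and the inverse-reward weighting are illustrative assumptions rather than the paper's exact scheme.

```python
import random

def retrospective_sample(pool, k, seed=0):
    """Sample k earlier examples, weighting harder ones (those with
    lower past reward) more heavily so they are revisited more often."""
    rng = random.Random(seed)
    weights = [1.0 - ex["past_reward"] + 1e-6 for ex in pool]  # harder => heavier
    return rng.choices(pool, weights=weights, k=k)
```

An example the policy previously solved perfectly gets a near-zero weight, while one it failed outright dominates the resampled batch.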

    These stages are complemented by hybrid reward mechanisms—combining rule-based exact match verification with semantic evaluation by a lightweight LLM—ensuring both precision and recall during policy training.

    Technical Design and Methodological Advantages
    QwenLong-L1 integrates recent advances in group-relative RL optimization, specifically GRPO and DAPO, to mitigate the computational overhead associated with long-context value estimation:

    GRPO estimates advantage by normalizing rewards within sampled groups, eliminating the need for a separate value network and encouraging diverse generation patterns.
    DAPO incorporates mechanisms such as dynamic sampling, overlength penalty shaping, and asymmetric clipping thresholds to prevent entropy collapse and mitigate length biases during training.
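The group-relative advantage at the heart of GRPO can be shown in a small dependency-free sketch. This is a simplified illustration of the normalization described above, not the exact published objective.

```python
def group_relative_advantages(rewards):
    """GRPO-style advantages: z-score each sampled response's reward
    against its own group, so no separate value network is needed."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    if std == 0:
        return [0.0] * n  # identical rewards carry no learning signal
    return [(r - mean) / std for r in rewards]
```

For a group with rewards [0, 1], the failing sample gets advantage -1 and the succeeding one +1, which is the relative signal used to update the policy.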

    The reward function is defined as the maximum of two signals: a deterministic rule-based match and a semantic judgment from a compact evaluator model. This hybrid approach avoids overfitting to rigid formats while maintaining answer correctness across varied notations and phrasings.
    Moreover, the framework is optimized via progressive context scaling, where the RL process transitions from 20K-token to 60K-token input lengths in controlled phases, stabilizing training dynamics and facilitating policy generalization.
    Experimental Results and Benchmark Performance
    QwenLong-L1 was evaluated on seven long-context document QA benchmarks, including DocMath, Frames, 2WikiMultihopQA, HotpotQA, Musique, NarrativeQA, and Qasper. The 32B variant, QwenLong-L1-32B, demonstrated strong empirical performance:

    It outperformed baseline models such as R1-Distill-Qwen-32B by 5.1 points and exceeded leading models such as OpenAI-o3-mini and Qwen3-235B-A22B.
    Its performance was comparable to Claude-3.7-Sonnet-Thinking, indicating competitive reasoning capabilities under extreme context lengths.
    Pass@K analysis revealed consistent improvements with increased sampling, achieving a Pass@2 average of 73.7, surpassing DeepSeek-R1 and OpenAI-o1-preview, even at low sampling rates.
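Pass@K itself can be computed with the standard unbiased estimator from the code-generation literature, where n responses are sampled per problem and c of them are correct; this is a sketch of that estimator, and the paper's exact evaluation protocol may differ.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples drawn (without
    replacement) from n generations, c of which are correct, succeeds."""
    if n - c < k:
        return 1.0  # too few incorrect samples to fill all k slots
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For instance, with 4 generations of which 2 are correct, pass@2 is 1 - C(2,2)/C(4,2) = 5/6.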

    Ablation studies further validated the individual contributions of SFT, phased RL, and retrospective sampling. Notably, RL played a decisive role in enabling emergent reasoning behaviors such as grounding, subgoal setting, verification, and backtracking—traits not effectively induced by supervised fine-tuning alone.
    Conclusion
    QwenLong-L1 represents a systematic approach to equipping LRMs with robust long-context reasoning capabilities through reinforcement learning. Its design effectively bridges the gap between short-context expertise and the demands of information-dense environments by combining supervised initialization, curriculum-driven context scaling, and hybrid evaluation strategies. The framework not only achieves state-of-the-art results across long-context benchmarks but also demonstrates the emergence of interpretable reasoning patterns during training.

    Check out the Paper, Model on Hugging Face and GitHub Page. All credit for this research goes to the researchers of this project.
    Asif Razzaq is the CEO of Marktechpost Media Inc.
The platform boasts of over 2 million monthly views, illustrating its popularity among audiences.Asif Razzaqhttps://www.marktechpost.com/author/6flvq/NVIDIA Releases Llama Nemotron Nano 4B: An Efficient Open Reasoning Model Optimized for Edge AI and Scientific TasksAsif Razzaqhttps://www.marktechpost.com/author/6flvq/A Coding Implementation to Build an AI Agent with Live Python Execution and Automated ValidationAsif Razzaqhttps://www.marktechpost.com/author/6flvq/Step-by-Step Guide to Build a Customizable Multi-Tool AI Agent with LangGraph and Claude for Dynamic Agent CreationAsif Razzaqhttps://www.marktechpost.com/author/6flvq/A Comprehensive Coding Guide to Crafting Advanced Round-Robin Multi-Agent Workflows with Microsoft AutoGen #qwen #researchers #proposes #qwenlongl1 #reinforcement
    WWW.MARKTECHPOST.COM
    Qwen Researchers Propose QwenLong-L1: A Reinforcement Learning Framework for Long-Context Reasoning in Large Language Models
While large reasoning models (LRMs) have shown impressive capabilities in short-context reasoning through reinforcement learning (RL), these gains do not generalize well to long-context scenarios. Applications such as multi-document QA, research synthesis, and legal or financial analysis require models to process and reason over sequences exceeding 100K tokens. However, RL optimization in such regimes is plagued by slower reward convergence, unstable policy updates due to KL-divergence fluctuations, and reduced exploration resulting from entropy collapse. These bottlenecks reveal a fundamental gap in transitioning LRMs from short-context proficiency to long-context generalization.

QwenLong-L1: A Structured RL Framework for Long-Context Adaptation

To address these limitations, the Qwen research team introduces QwenLong-L1, a novel RL framework designed to adapt LRMs to long-context reasoning tasks. The framework is structured into three key stages:

Warm-up Supervised Fine-Tuning (SFT): Provides a stable initialization for the policy model by training on curated question-context-answer triplets, ensuring basic competence in contextual comprehension and answer extraction.

Curriculum-Guided Phased Reinforcement Learning: Introduces a staged training process with gradually increasing context lengths. This progression enables the model to incrementally acquire long-context reasoning behaviors without destabilizing policy updates.

Difficulty-Aware Retrospective Sampling: Enhances exploration by maintaining and reusing hard examples from previous phases, weighted by their difficulty, to encourage deeper reasoning and robustness across diverse inputs.

These stages are complemented by hybrid reward mechanisms, combining rule-based exact-match verification with semantic evaluation by a lightweight LLM, to ensure both precision and recall during policy training.
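The hybrid reward mechanism can be sketched in a few lines. This is an illustrative sketch, not the released code: the rule-based check and the judge function below are simplified stand-ins (in the actual framework, the semantic verdict comes from a lightweight LLM evaluator), but the max-combination logic reflects the design described in the article:

```python
import re

def rule_based_reward(prediction: str, reference: str) -> float:
    """Deterministic signal: 1.0 on an exact match after light normalization, else 0.0."""
    norm = lambda s: re.sub(r"\s+", " ", s.strip().lower())
    return 1.0 if norm(prediction) == norm(reference) else 0.0

def judge_reward(prediction: str, reference: str) -> float:
    """Stand-in for the lightweight LLM judge. A real setup would query a small
    evaluator model for a 0/1 equivalence verdict; here we approximate with a
    substring check purely for illustration."""
    return 1.0 if reference.strip().lower() in prediction.strip().lower() else 0.0

def hybrid_reward(prediction: str, reference: str) -> float:
    # Taking the max keeps precision (strict rule) without sacrificing recall
    # (semantic judge): a correct answer in a different surface form still
    # earns full reward.
    return max(rule_based_reward(prediction, reference),
               judge_reward(prediction, reference))
```

Because the two signals are combined with max, a response that fails strict matching can still earn full reward when the judge accepts it, which is how the framework avoids overfitting to rigid answer formats.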
Technical Design and Methodological Advantages

QwenLong-L1 integrates recent advances in group-relative RL optimization, specifically GRPO and DAPO, to mitigate the computational overhead associated with long-context value estimation:

GRPO estimates advantage by normalizing rewards within sampled groups, eliminating the need for a separate value network and encouraging diverse generation patterns.

DAPO incorporates mechanisms such as dynamic sampling, overlength penalty shaping, and asymmetric clipping thresholds to prevent entropy collapse and mitigate length biases during training.

The reward function is defined as the maximum of two signals: a deterministic rule-based match and a semantic judgment from a compact evaluator model (e.g., Qwen2.5-1.5B). This hybrid approach avoids overfitting to rigid formats while maintaining answer correctness across varied notations and phrasings. Moreover, the framework is optimized via progressive context scaling, where the RL process transitions from 20K-token to 60K-token input lengths in controlled phases, stabilizing training dynamics and facilitating policy generalization.

Experimental Results and Benchmark Performance

QwenLong-L1 was evaluated on seven long-context document QA benchmarks: DocMath, Frames, 2WikiMultihopQA, HotpotQA, Musique, NarrativeQA, and Qasper. The 32B variant, QwenLong-L1-32B, demonstrated strong empirical performance: it outperformed baseline models such as R1-Distill-Qwen-32B by 5.1 points and exceeded leading proprietary systems like OpenAI-o3-mini and Qwen3-235B-A22B. Its performance was comparable to Claude-3.7-Sonnet-Thinking, indicating competitive reasoning capabilities under extreme context lengths. Pass@K analysis revealed consistent improvements with increased sampling, achieving a Pass@2 average of 73.7, surpassing DeepSeek-R1 and OpenAI-o1-preview, even at low sampling rates.
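GRPO's group-relative advantage estimation can likewise be sketched. The snippet below is a minimal illustration under the assumption of one scalar reward per sampled response in a group; the actual implementation works on batched tensors and token-level log-probabilities, but the core idea, normalizing each reward against its own group's statistics instead of a learned value network, is the same:

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages: z-score each sampled response's reward against
    its own group's mean and (population) standard deviation, so no separate
    value network is needed for advantage estimation."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0.0:  # all samples tied: no relative signal, no policy update
        return [0.0] * len(rewards)
    return [(r - mean) / std for r in rewards]
```

Responses that beat their group's average get positive advantage and are reinforced; below-average responses are penalized. Sampling several responses per prompt is what makes this relative comparison meaningful.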
Ablation studies further validated the individual contributions of SFT, phased RL, and retrospective sampling. Notably, RL played a decisive role in enabling emergent reasoning behaviors such as grounding, subgoal setting, verification, and backtracking, traits not effectively induced by supervised fine-tuning alone.

Conclusion

QwenLong-L1 represents a systematic approach to equipping LRMs with robust long-context reasoning capabilities through reinforcement learning. Its design effectively bridges the gap between short-context expertise and the demands of information-dense environments by combining supervised initialization, curriculum-driven context scaling, and hybrid evaluation strategies. The framework not only achieves state-of-the-art results across long-context benchmarks but also demonstrates the emergence of interpretable reasoning patterns during training.

Check out the Paper, Model on Hugging Face, and GitHub Page. All credit for this research goes to the researchers of this project.

Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience.
The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
  • What's New on Max in June 2025

    LIFEHACKER.COM
    What's New on Max in June 2025
    Max still goes by Max for now, but the return to the original HBO Max branding is expected sometime this summer. In the meantime, the third season of HBO Original series The Gilded Age is set to drop in weekly installments starting June 22. The period drama, set in 1880s New York, stars Carrie Coon, Christine Baranski, and Cynthia Nixon, among others.

    On the movie slate, there's A24's Parthenope (June 6), a Paolo Sorrentino coming-of-age film set in Naples and starring Celeste Dalla Porta and Gary Oldman, and Cleaner (June 13), about radical activists in present-day London who take hostages at an energy company's annual gala in an attempt to expose corruption. Max is also getting The Day The Earth Blew Up: A Looney Tunes Movie (June 27) and the much-hyped A Minecraft Movie, a fantasy comedy film based on the video game and starring Jason Momoa, Jack Black, Danielle Brooks, Emma Myers, and Jennifer Coolidge. The movie debuted in theaters this spring and is listed as "coming soon" to Max.

    HBO Original documentaries coming in June include The Mortician, which will debut June 1; the three-episode run will dive deep into the family behind California's Lamb Funeral Home and its morally questionable practices. Zackary Drucker's Enigma (June 24) explores transgender legacy and identity through the stories of April Ashley and Amanda Lear, among others, while My Mom Jayne (June 27) follows actress Mariska Hargitay in her journey to learn more about her mom, who died when Hargitay was three. Max subscribers will also get a variety of live sports, including NHL playoff games as well as a handful of MLB and U.S. soccer matchups. Here's everything else coming to Max in June.
What’s coming to Max in June 2025

Available June 1
A Hologram for the King (2016)
A Nightmare on Elm Street (2010)
A Perfect Getaway (2009)
Backtrack (2016)
Batman and Superman: Battle of the Super Sons (2022)
Black Patch (1957)
Blues in the Night (1941)
Casino (1995)
Fight Club (1999)
Gentleman Jim (1942)
Hellboy (2004)
I Am Not Your Negro (2017)
Igor (2008)
Illegal (1955)
In the Good Old Summertime (1949)
Invasion of the Body Snatchers (1978)
Kid Glove Killer (1942)
Meet Me in St. Louis (1944)
My Scientology Movie (2017)
Numbered Men (1930)
One Foot in Heaven (1941)
Parasite (2019)
Presenting Lily Mars (1943)
Pride & Prejudice (2005)
Public Enemies (2009)
Reign of the Supermen (2019)
Serenade (1956)
Silver River (1948)
Spaceballs (1987)
Split (2017)
Strike Up the Band (1940)
Summer Stock (1950)
Superman: Man of Tomorrow (2020)
Superman: Red Son (2020)
Superman: Unbound (2013)
Superman/Batman: Public Enemies (2009)
Thank Your Lucky Stars (1943)
The Death of Superman (2018)
The Fighting 69th (1940)
The Harvey Girls (1946)
The Hunger Games (2012)
The Hunger Games: Catching Fire (2013)
The Hunger Games: Mockingjay Part 1 (2014)
The Hunger Games: Mockingjay Part 2 (2015)
The Man Who Invented Christmas (2017)
The Match King (1932)
The Mayor of Hell (1933)
The Mortician (HBO Original)
The Nitwits (1935)
The Prince and the Pauper (1937)
The Sea Chase (1955)
The Sea Hawk (1940)
The Sunlit Night (2019)
The Verdict (1946)
They Made Me a Criminal (1939)
This Side of the Law (1950)
Three Faces East (1930)
Three Strangers (1946)
Total Drama Island, Season 2 (Cartoon Network)
Wagons West (1952)
Words and Music (1948)
You'll Find Out (1940)
Ziegfeld Follies (1946)

Available June 2
BBQ Brawl, Season 6 (Food Network)

Available June 3
Bullet Train (2022)
Ugliest House in America, Season 6 (HGTV)

Available June 4
1000-lb Roomies, Season 1 (TLC)
Fatal Destination, Season 1 (ID)

Available June 5
Bea's Block, Season 1C (Max Original)
Chespirito: Not Really on Purpose, Season 1 (Max Original)

Available June 6
House Hunters International: Volume 9, Season 201 (HGTV)
Parthenope (A24)

Available June 10
Virgins, Season 1 (TLC)

Available June 11
Guy's Grocery Games, Season 38 (Food Network)

Available June 12
Bitchin' Rides, Season 11
Mini Beat Power Rockers: A Superheroic Night (Discovery International)

Available June 13
Cleaner (2025)
House Hunters: Volume 10, Season 240 (HGTV)
Maine Cabin Masters, Season 10 (Magnolia Network)
Super Sara (Max Original)
Toad & Friends, Season 1B

Available June 16
Hero Ball, Season 3B

Available June 17
Dr. Sanjay Gupta Reports: Animal Pharm (CNN Originals, 2025)
Super Mega Cakes, Season 1 (Food Network)

Available June 19
Expedition Unknown, Season 15 (Discovery)
Mystery At Blind Frog Ranch, Season 5 (Discovery)

Available June 20
House Hunters: Volume 10, Season 241 (HGTV)
Lu & The Bally Bunch, Season 1C (Cartoon Network)
Now or Never: FC Montfermeil (Max Original)
Teen Titans Go!, Season 9B (Cartoon Network)

Available June 21
The Kitchen, Season 38 (Food Network)
The Never Ever Mets, Season 2 (OWN)

Available June 22
The Gilded Age, Season 3 (HBO Original)

Available June 23
Match Me Abroad, Season 2 (TLC)

Available June 24
Enigma (HBO Original)
Mean Girl Murders, Season 3 (ID)
The Invitation (2022)

Available June 25
Rehab Addict, Season 10 (HGTV)

Available June 27
House Hunters: Volume 10, Season 242 (HGTV)
My Mom Jayne (HBO Original)
Pati, Seasons 1&2 (Max Original)
The Day the Earth Blew Up: A Looney Tunes Movie (2025)

Available June 29
#Somebody's Son, Season 1 (OWN)
Family or Fiancé, Season 4 (OWN)

Available June 30
90 Day Fiancé: Pillow Talk, Season 11 (TLC)
Truck U, Season 21