• Sonic Racing CrossWorlds hands-on preview: It is time to move over Mario

    Not to be outdone by his one-time rival, Sonic takes the fight to Mario in a new racing game with genuinely surprising mechanics we’ve not seen before in the genre.

    Tech | 20:00, 07 Jun 2025

    Where will you end up?

    Who doesn’t love a kart racer? The trouble is, they’ve started to fall into a pretty staid rhythm now. You battle it out for lap one, everything sort of settles down in lap two, and then lap three can be similarly formulaic if you don’t get hit by a power-up or two.

    While Nintendo Switch 2’s launch title Mario Kart World has moved to change this with a system that links tracks together, iconic hedgehog Sonic is doing something a little different with his return to karting. Not only does it make for much more chaotic racing, but there’s more going on under the hood than it first seems.

    Tracks are varied, making jumping from one to the other very exciting

    Sonic Racing CrossWorlds starts off like most other kart racers. Players pick from a starting roster of 23 characters, pick their vehicle, and then head off. And, while the first lap plays out as you’d expect, whoever is winning gets to pick lap two’s location, meaning racers drive through a Travel Ring and end up on a different track, before coming back for lap three.

    Getting ahead of another vehicle so you can pick a track you know better for the next stage of the race is great, as is the ‘Rival’ you’ll be assigned at the start of each Grand Prix. Not only do these racers react more aggressively to you, but they’ll also offer unique dialogue when you appear out of nowhere to overtake them, hit them with an item, or fall behind the pack.

    This track sees you travel through a Dragon

    Once the Grand Prix is done, there’s a chance to secure further points by racing across each track from the prior Grand Prix in a sort of three-lap sprint. In my limited playtime, I was locked alongside my rival for points before pulling out the win thanks to that final sprint.

    More competitive racers may baulk at such randomness creeping into tracks they’ve rehearsed, but it’s a breath of fresh air for the genre and stops those middle laps feeling too predictable.

    Each vehicle can be customised further

    Aside from the Travel Rings, it doesn’t hurt that Sonic Racing CrossWorlds is a fantastic racer in its own right. Drifting to earn a boost and pulling off tricks to zip past rivals is great fun, although it did take a moment to knock me out of my Mario Kart muscle memory.

    Vehicles fall into a variety of categories, and each has customisable paint jobs, too, letting you make each feel bespoke. Want a purple car for Big the Cat? Go for it. Looking to add some colour to Shadow’s vehicle? You can do it. There are also gadgets you can use to tie into your playstyle, like hoovering up rings from further away, or simply improving your smallest boost.

    Is it a bird? Is it a plane? No, it’s a hedgehog with a driver’s licence!

    The game also brings back the ‘Land, Sea, and Air’ transformation modes for vehicles, meaning one minute you’re driving, then sailing, and then flying. The latter is particularly enjoyable, letting your character of choice navigate jump hoops and tight turns, while there are secrets to find throughout each track to encourage replayability.

    Sonic’s video games feel like they’re in a pretty good spot at the moment, and CrossWorlds looks to be another fine addition. Much will hinge on how fun its tracks are, but early signs are very, very promising that this will be a racer that shakes up the genre just as well as anyone else can.

    Previewed on PS5. Preview access provided by the publisher.
    www.dailystar.co.uk
  • How NPR’s Tiny Desk became the biggest stage in music

    Until last October, Argentinian musical duo Ca7riel & Paco Amoroso were more or less a regional act. Known for their experimental blend of Latin trap, pop, and rap, the pair had a fanbase, but still weren’t cracking more than 3,000 daily streams across services like Spotify, Apple Music, and YouTube. Within a week, they shot up 4,700%—hitting 222,000 daily streams—according to exclusive data from Luminate, the firm that powers the Billboard charts. Suddenly Ca7riel & Paco Amoroso were global pop stars.

    What changed? On Oct. 4, the pair were featured in a Tiny Desk Concert, part of NPR’s 17-year-old video series in which musicians perform stripped-down sets behind an office desk in the cramped Washington, D.C. headquarters of the public broadcaster.

    In the concert video, the artists play five songs from their debut album Baño Maria, which came out last April. Paco’s raspy voice emerges from underneath a puffy blue trapper hat while Ca7riel sports an over-the-top pout and a vest made of stitched-together heart-shaped plush toys. The pair sing entirely in Spanish, backed by their Argentinian bandmates (sporting shirts screenprinted with their visas) and an American horn section. The duo’s performance quickly took off across the internet. Within five days, it had racked up more than 1.5 million views on YouTube, and hit 11 million in little more than a month. It also reverberated across social media: the NPR Music Instagram post garnered nearly 900,000 likes, and TikTok clips drew hundreds of thousands of views.

    In a year that featured Tiny Desk performances from buzzy stars like Chappell Roan and Sabrina Carpenter, as well as established acts like Chaka Khan and Nelly Furtado, Ca7riel & Paco Amoroso’s concert was the most-watched of 2024. It currently sits at 36 million views. 

    That virality translated to an influx of bookings for the duo, including a performance at Coachella in April, and upcoming slots at Glastonbury in June, FujiRock Japan in July, and Lollapalooza and Outside Lands in August. Ca7riel & Paco Amoroso’s global tour includes sold-out dates at Mexico’s 20,000-capacity Palacio de los Deportes and Chile’s 14,000-seat Movistar Arena—and was previewed by an appearance on The Tonight Show Starring Jimmy Fallon in April.

    “Through Tiny Desk, we’ve noticed media approaching us, promoters being very interested in offering their spaces and festivals, and many media outlets opening doors to show us to the world,” says Jonathan Izquierdo, the band’s Spain-based tour manager who began working with the duo shortly after the Tiny Desk Concert debuted. “We’ve managed to sell out summer arena shows in record time and we’re constantly adding new concerts. Promoters are knocking on our doors to get the Tiny Desk effect.”

    Bobby Carter [Photo: Fenn Paider/courtesy NPR]

    Tiny Desk, Big Influence

    The Tiny Desk effect is something Bobby Carter, NPR Tiny Desk host and series producer, has seen firsthand. Carter has been at NPR for 25 years, including the past 11 on the Tiny Desk team. He took the reins when Bob Boilen, the longtime All Songs Considered host who launched Tiny Desk in 2008, retired in 2023. 

    The series—which now has more than 1,200 videos—began as an internet-first way for Boilen to showcase performances from musicians that were more intimate than what happens in bigger concert venues. The first installment, featuring folk artist Laura Gibson, went up on YouTube. Today, the concerts are posted on the NPR site with a writeup and credits, as well as YouTube, where NPR Music has 11 million followers. NPR Music also clips installments on Instagram, where it has 3 million followers. 

    In the early days, NPR staff reached out to touring bands to secure bookings. Acts coming through DC could often be cajoled into filming an installment before heading out to their venues for that night’s sound check. Now, musicians come to DC just for the chance to record in NPR’s offices. 

    “We don’t have to worry about tours anymore,” Carter says. “Labels and artists are willing to come in solely for a Tiny Desk performance. They understand the impact that a really good Tiny Desk concert can have on an artist’s career.”

    Early on, the stripped-down nature of the Tiny Desk—artists can’t use any audio processing or voice modulation—lent itself to rock, folk, and indie acts. But a 2014 concert with T-Pain, in which the famously autotune-heavy singer unveiled an impressive set of pipes, showed how artists from a broader array of genres could shine behind the Tiny Desk. 

    “Everyone knows at this point that they’re going to have to do something different in our space,” Carter says. “It’s a bigger ask for hip-hop acts and electronic acts, but most artists now understand how important it can be if they nail it.”

    Carter highlights rapper Doechii as an artist who overhauled her sound for her Tiny Desk concert in December. Doechii’s all-female backing band used trumpet, saxophone, guitar, and bass to transform songs from her mixtape Alligator Bites Never Heal for the live setting. “If you listen to the recorded version of her music, it’s nothing like what you saw in that Tiny Desk,” Carter says. 

    Clips of Doechii’s Tiny Desk virtuosity lit up social media, introducing the ‘swamp princess’ to new fans. The concert even inspired a viral parody, with writer-director-comedian Gus Heagary pretending to be an NPR staffer watching the performance.   

    Reimagining Old Favorites

    It isn’t just emerging acts that totally revamp their sound for a Tiny Desk opportunity. Established artists like Usher, Justin Timberlake, and Cypress Hill have followed T-Pain’s lead and used NPR’s offices to showcase reimagined versions of some of their most popular songs. When Juvenile recorded his installment in June 2023, he was backed by horns and saxophones, a violin and cello, and Jon Batiste on melodica. The New Orleans rapper played an acoustic version of “Back That Azz Up” twice at the audience’s request—the first encore in the series’ history.

    “I love what has happened with hip hop [on Tiny Desk],” Carter says. He explains that artists now approach the concert with the mindset: ‘I have to really rethink what I’ve been doing for however long I’ve been doing it, and present it in a whole new way.’

    Tiny Desk has also helped musicians like Juvenile, gospel artist Marvin Sapp, and percussionist Sheila E. to reach new audiences while reminding listeners they’re still making music. “We’re helping artists to re-emerge,” Carter says, “tapping into legacy acts and evergreen artists [to help] breathe new life into their careers.”

    In many ways, Tiny Desk now occupies a niche once filled by MTV Unplugged—but for the generation that has replaced cable with YouTube and streaming.  

    “Maybe 10, 15, 20 years ago, all of our favorite artists had this watershed moment in terms of a live performance,” Carter says. “Back in the day it was MTV Unplugged. SNL is still doing their thing. But when you think about the generation now that lives on YouTube, some of these Tiny Desk performances are going to be the milestone that people point to when it comes to live performances.”

    Building a Diverse Audience

    When Carter talks about Tiny Desk concerts reaching a new generation of listeners, it’s not conjecture. He notes that the NPR Music YouTube channel’s 11 million subscribers are “as young and diverse as it gets. It’s almost half people of color [and] much younger than the audience that listens to NPR on air, which is an audience NPR has been trying to tap for a long time.”

    That diversity informs some of the special series that Tiny Desk produces. The Juvenile video was part of Carter’s second run of concerts recorded for Black Music Month, in June. Ca7riel & Paco Amoroso’s video was tied to El Tiny, a Latin-focused series that debuts during Latin Heritage Month (from mid-September to mid-October) and is programmed by Tiny Desk producer and Alt.Latino host AnaMaria Sayer.

    Ca7riel & Paco Amoroso’s tour manager, Izquierdo, has worked with artists featured in the series before. He says Tiny Desk is crucial for Latin American artists trying to break through: “I’ve realized that for U.S. radio, Latin music benefits from Tiny Desk.”

    The Tiny Desk audience’s broad demographics are also increasingly reflected in its programming. Bad Bunny’s April installment took his reggaeton-inspired songs from recent album Debi Tirar Mas Fotos to their acoustic roots, using an array of traditional Puerto Rican, Latin American, and Caribbean instruments, such as the cuatro puertorriqueño, tiple, güicharo, and bongos. “[Our] audience informs a whole lot of what we do,” Carter says. “I get so many pointers from YouTube comments, like ‘Have you heard of this artist?’ We’re watching all that stuff because it helps us stay sharp.”

    Tiny Desk Heard Round the World

    With a strong global audience, Tiny Desk has been expanding into Asia. In 2023, NPR struck a licensing deal with South Korean telecom LG U+ and production company Something Special to produce Tiny Desk Korea for television. Last year, NPR inked a deal with the Japan Broadcasting Corporation (NHK) to launch Tiny Desk Concerts Japan. “We’re really expanding in terms of global reach,” Carter says.

    Here in the States, Carter and Sayer recently launched Tiny Desk Radio, a series that will revisit some of Tiny Desk’s notable installments, sharing behind-the-scenes stories from their productions and playing the audio from the concerts. “Our engineers put a lot of time and effort into making sure that we sound great,” Carter says. “I hear it a lot—people tell me they prefer an artist’s Tiny Desk over anything.”

    That’s something Ca7riel & Paco Amoroso clearly have on their mind as they navigate the Tiny Desk effect and a new level of recognition (their daily streams haven’t dipped below 50,000 a day since the beginning of the year). The duo released an EP in February, Papota, which features four new songs, plus the recorded versions of their pared-down Tiny Desk performances. They also released a short film that recreates their Tiny Desk performance—this time in a Buenos Aires diner.

    One of the themes of the EP is the pair wrestling with the implications of their viral success. On the song Impostor, Ca7riel asks “¿Y ahora que vamos hacer?/El tiny desk me jodio” (What do we do now? Tiny Desk fucked me up.) It’s an overstatement, but an acknowledgment that the path they’re now on ran directly through the NPR offices.
    www.fastcompany.com
  • Hell Is Us hands-on preview: ‘AAA games are so bloody bland’

    Hell Is Us – not a Ubisoft adventure (Nacon)

    GameCentral goes hands-on with an original sci-fi action adventure where the emphasis is on unguided exploration, with some throwback Zelda inspirations.
    You might already have heard the name Hell Is Us, as the game was first announced way back in April 2022. We previewed the sci-fi tinged adventure title, developed by Rogue Factor, for the first time last year, but it’s now on the home straight, with a launch slated for September 4, and it’s shaping up to be a peculiar but intriguing mix of influences and ideas.
    Our original preview covered the opening portion of the game, so we’ll avoid recycling the same beats here. But for the general gist, you play as a United Nations peacekeeper named Rémi who absconds to the war-torn country of Hadea to track down his parents. A stroll through the tutorial woods later, however, and you realise this isn’t your average civil war. 
    If you’re a fan of Alex Garland’s Annihilation, the strange, faceless alien from the film’s conclusion seems to have been a major influence here. The Hollow Walkers, as they’re called, are very creepy, as they lurch towards you unpredictably, with morphing limbs which give way to vivid, crystallised attacks or, in some cases, attached entities you have to kill first. Their glossy white exteriors act as a stark contrast to the muted eastern European landscapes and dungeons you explore. 
    As a game, Hell Is Us is somewhere between Bloodborne and The Elder Scrolls. Combat-wise, it’s pulling from the former, as you manage a stamina bar, study enemy patterns for the best moment to strike, and rely on aggressive play to replenish a magic gauge for special skills. You also have access to a drone which has various uses tied to cooldown meters, between distracting enemies for crowd control and making a charging lunge to dash across the field.
    Rogue Factor has stressed Hell Is Us isn’t a Soulslike though. You’re not scrambling for bonfires or any equivalent, but exploring and chatting with characters to piece together where you need to go next, discovering new places of interest, and encountering side objectives which bleed into the overall experience of navigating each semi-open world area. The ethos behind Hell Is Us is discovery and the organic feeling of finding your feet through clues in the world, rather than using obvious quest markers. 
    This might bring to mind acclaimed games like Elden Ring and The Legend Of Zelda: Breath Of The Wild, in their attempt to declutter open world exploration, but the game’s director, Jonathan Jacques-Belletête, believes the roots of what Hell Is Us is aiming for go much further back.

    A cosmic horror vibe (Nacon)

    ‘Honestly, something like Zelda: A Link To The Past is much closer to what we’re doing now than a Breath Of The Wild,’ said Jacques-Belletête. ‘Sometimes people are like: ‘I really can’t put my finger on what kind of game it is, what is it?’ It’s just a bloody adventure game man. Look, you’ve got a combat system, you’ve got enemies, you’ve got a world to explore, there’s a mystery, you’re not exactly sure of this and that, there’s some secrets, there’s some dungeons, we did a game like that. It’s called an adventure game,’ he laughs. ‘There were even side-quests in A Link To The Past that didn’t tell you they were side-quests.’
    Hell Is Us might have roots in classic adventure games, but Jacques-Belletête is keen to highlight the fatigue around Ubisoft style open world bloat, where checklists and quest markers are traditionally used in abundance. With the success of Elden Ring, there’s a sense many players are craving a return to the hands-off approach, where you discover and navigate without guidance – something which Hell Is Us is hoping to capitalise on after being in development for five years.
    ‘It’s so much of the same thing,’ he says, when talking about Ubisoft style open worlds. ‘It loses all meaning. Things within these open worlds lose a lot of their taste because too much is like not enough. Do you know what I mean? You have to fill up these spaces with stuff and they just become a bit bland. Like once you’ve seen one, you’ve seen all of them. 
    ‘It’s not Assassin’s Creed, it’s not that, it’s all these things. We’ve all played them. I’ve got hundreds of hours in Elder Scrolls, all the Elder Scrolls, and that’s not the point. It’s not that I don’t like them. It’s just trends do their time and then you have other ideas. It’s a pendulum as well. Games used to be a lot more hardcore that way, we’re trying to go back to that.’
    The bulk of my time in Hell Is Us is spent in the Acasa Marshes, the second semi-open area where the game lets you off the leash. The swampy lands are crawling with Hollow Walkers in various forms, from hulking monstrosities to mage-like foes that hurl projectiles from clifftops. A swirling black vortex is a key focal point but it’s surrounded by enemies, while a settlement of villagers sits on a hill in the distance.
    According to the developer, this area is one of the largest areas in the game, ‘if not the biggest one’, and it seems pretty expansive. We found ourselves heading towards the village, whose militaristic leader points you towards your main objective with only a vague mention of going ‘north east’. You have to dig out your compass to get a grasp on your position, as you try and navigate towards, and identify, the next location based on this information. 
    The lack of quest markers makes the experience more involving, as you have to pay more attention to your surroundings and what characters say, but I wasn’t entirely sold on the story or writing. It’s something which will hopefully become more engrossing as you get a better grasp of what’s going on, but I wish I was drawn to interact with the characters based on something beyond the need to progress. 
    When you are exploring aimlessly though, Hell Is Us offers some captivating chaos – even if some areas did appear to be gated off. We fought our way to the aforementioned swirling black vortex, encountering enemies beyond our skill level, only to find it was inaccessible without a specific item. We later found an underground tunnel filled with enemies, where an individual connected to a side-quest was trapped at the other end.

    Surprises lurk in the marshes

    Along with these open areas, Hell Is Us also offers dungeons built around puzzles and combat encounters. Aside from the opening introduction, we were shown a later example in the Lymbic Forge, which offered a nice dose of visual variety, with flowery gardens surrounding the boggy marshes. We didn’t get a whole lot of time to explore, but it did highlight the breadth of the combat upgrades and customisation with late-game weapons.
    Hell Is Us is a melting pot of influences, and while we’re not sold on everything it’s trying to accomplish, it’s certainly another AA game with big, exciting ambitions – a trend amplified this year by the success of Clair Obscur: Expedition 33. For the game’s director, who has a long history in the AAA space working at Eidos Montreal, the jump to AA, with a smaller team and less financial pressure, means you have a better chance of striking gold. 

    ‘Look at what’s happened to the industry over the past few years,’ Jacques-Belletête said. ‘Everything is crumbling. The big ones are crumbling. It’s unsustainable. And the games are so bloody bland, man. Everything is starting to taste the same. 
    ‘I find there’s nothing worse than starting a game and right away, in the first two minutes, you know how everything’s going to work. You know how every single mechanic is going to work. They might have a little [variation] in how it’s going to feel, or this and that, the user interface will change a bit, but you’ve gone through the ropes a dozen times.
    ‘A game has to occupy a space in your brain that your brain can’t really compute just yet. When you turn your console off and it stays there, that’s because something is going on. Your brain is processing. And I think that’s a lot easier to do in the AA space than the AAA.’
    Formats: Xbox Series X/S, PlayStation 5, and PC
    Price: £49.99
    Publisher: Nacon
    Developer: Rogue Factor
    Release Date: 4th September 2025
    Age Rating: 16

    The combat itEmail gamecentral@metro.co.uk, leave a comment below, follow us on Twitter, and sign-up to our newsletter.
    To submit Inbox letters and Reader’s Features more easily, without the need to send an email, just use our Submit Stuff page here.
    For more stories like this, check our Gaming page.

    GameCentral
    Sign up for exclusive analysis, latest releases, and bonus community content.
    This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. Your information will be used in line with our Privacy Policy
    #hell #handson #preview #aaa #games
    Hell Is Us hands-on preview: ‘AAA games are so bloody bland’
    Hell Is Us – not a Ubisoft adventureGameCentral goes hands-on with an original sci-fi action adventure where the emphasis is on unguided exploration, with some throwback Zelda inspirations. You might already have heard the name Hell Is Us, as the game was first announced way back in April 2022. We previewed the sci-fi tinged adventure title, developed by Rogue Factor, for the first time last year but now it’s now on the home-straight, with a launch slated for September 4, and it’s shaping up to be a peculiar but intriguing mix of influences and ideas. Our original preview covered the opening portion of the game, so we’ll avoid recycling the same beats here. But for the general gist, you play as a United Nations peacekeeper named Rémi who absconds to the war-torn country of Hadea to track down his parents. A stroll through the tutorial woods later, however, and you realise this isn’t your average civil war.  If you’re a fan of Alex Garland’s Annihilation, the strange, faceless alien from the film’s conclusion seems to have been a major influence here. The Hollow Walkers, as they’re called, are very creepy, as they lurch towards you unpredictably, with morphing limbs which give way to vivid, crystallised attacks or, in some cases, attached entities you have to kill first. Their glossy white exteriors act as a stark contrast to the muted eastern European landscapes and dungeons you explore.  As a game, Hell Is Us is somewhere between Bloodborne and The Elder Scrolls. Combat wise, it’s pulling from the former, as you manage a stamina bar, study enemy patterns for the best moment to strike, and rely on aggressive play to replenish a magic gauge for special skills. You also have access to a drone which has various uses tied to cooldown meters, between distracting enemies for crowd control andmaking a charging lunge to dash across the field.  Rogue Factor has stressed Hell Is Us isn’t a Soulslike though. You’re not scrambling for bonfires or any equivalent, but exploring and chatting with characters to piece together where you need to go next, discovering new places of interest, and encountering side objectives which bleed into the overall experience of navigating each semi-open world area. The ethos behind Hell Is Us is discovery and the organic feeling of finding your feet through clues in the world, rather than using obvious quest markers.  This might bring to mind acclaimed games like Elden Ring and The Legend Of Zelda: Breath Of The Wild, in their attempt to declutter open world exploration, but the game’s director, Jonathan Jacques-Belletête, believes the roots of what Hell Is Us is aiming for goes much further back. A cosmic horror vibe‘Honestly, something like Zelda: A Link To The Past is much closer to what we’re doing now than a Breath Of The Wild,’ said Jacques-Belletête. ‘Sometimes people are like: ‘I really can’t put my finger on what kind of game it is, what is it?’ It’s just a bloody adventure game man. Look, you’ve got a combat system, you’ve got enemies, you’ve got a world to explore, there’s a mystery, you’re not exactly sure of this and that, there’s some secrets, there’s some dungeons, we did a game like that. It’s called an adventure game,’ he laughs. ‘There were even side-quests in A Link To The Past that didn’t tell you they were side-quests.’ Hell Is Us might have roots in classic adventure games but Jacques-Belletête, is keen to highlight the fatigue around Ubisoft style open world bloat, where checklists and quest markers are traditionally used in abundance. 
With the success of Elden Ring, there’s a sense many players are craving a return to the hands-off approach, where you discover and navigate without guidance – something which Hell Is Us is hoping to capitalise on after being in development for five years.  ‘It’s so much of the same thing,’ he says, when talking about Ubisoft style open worlds. ‘It loses all meaning. Things within these open worlds lose a lot of their taste because too much is like not enough. Do you know what I mean? You have to fill up these spaces with stuff and they just become a bit bland. Like once you’ve seen one, you’ve seen all of them.  ‘It’s not Assassin’s Creed, it’s not that, it’s all these things. We’ve all played them. I’ve got hundreds of hours in Elder Scrolls, all the Elder Scrolls, and that’s not the point. It’s not that I don’t like them. It’s just trends do their time and then you have other ideas. It’s a pendulum as well. Games used to be a lot more hardcore that way, we’re trying to go back to that.’ The crux of my time in Hell Is Us is spent in the Acasa Marshes, the second semi-open area where the game lets you off the leash. The swampy lands are crawling with Hollow Walkers in various forms, from hulking monstrosities to mage-like foes that hurl projectiles from clifftops. A swirling black vortex is a key focal point but it’s surrounded by enemies, while a settlement of villagers sits on a hill in the distance.  According to the developer, this area is one of the largest areas in the game, ‘if not the biggest one’, and it seems pretty expansive. We found ourselves heading towards the village, whose militaristic leader points you towards your main objective with only a vague mention of going ‘north east’. You have to dig out your compass to get a grasp on your position, as you try and navigate towards, and identify, the next location based on this information.  The lack of quest markers makes the experience more involving, as you have to pay more attention to your surroundings and what characters say, but I wasn’t entirely sold on the story or writing. It’s something which will hopefully become more engrossing as you get a better grasp of what’s going on, but I wish I was drawn to interact with the characters based on something beyond the need to progress.  When you are exploring aimlessly though, Hell Is Us offers some captivating chaos – even if some areas did appear to be gated off. We fought our way to the aforementioned swirling black vortex, encountering enemies beyond our skill level, only to find it was inaccessible due to not having a specific item. We later found an underground tunnel filled with enemies, where an individual connected to a side0quest was trapped at the other end.  Surprises lurk in the marshesAlong with these open areas, Hell Is Us also offers dungeons built around puzzles and combat encounters. Aside from the opening introduction, we were shown a later example in the Lymbic Forge, which offered a nice dose of visual variety, with flowery gardens surrounding the boggy marshes. We didn’t get a whole lot of time to explore, but it did highlight the breadth of the combat upgrades and customisation with late-game weapons.  Hell Is Us is a melting pot of influences, and while we’re not sold on everything it’s trying to accomplish, it’s certainly another AA game with big, exciting ambitions – a trend amplified this year by the success of Clair Obscur: Expedition 33. 
For the game’s director, who has a long history in the AAA space working at Eidos Montreal, the jump to AA, with a smaller team and less financial pressure, means you have a better chance of striking gold.
‘Look at what’s happened to the industry over the past few years,’ Jacques-Belletête said. ‘Everything is crumbling. The big ones are crumbling. It’s unsustainable. And the games are so bloody bland, man. Everything is starting to taste the same.
‘I find there’s nothing worse than starting a game and right away, in the first two minutes, you know how everything’s going to work. You know how every single mechanic is going to work. They might have a little [extra] in how it’s going to feel, or this and that, the user interface will change a bit, but you’ve gone through the ropes a dozen times.
‘A game has to occupy a space in your brain that your brain can’t really compute just yet. When you turn your console off and it stays there, that’s because something is going on. Your brain is processing. And I think that’s a lot easier to do in the AA space than the AAA.’
Formats: Xbox Series X/S, PlayStation 5, and PC
Price: £49.99
Publisher: Nacon
Developer: Rogue Factor
Release Date: 4th September 2025
Age Rating: 16
  • Google’s ‘world-model’ bet: building the AI operating layer before Microsoft captures the UI

    After three hours at Google’s I/O 2025 event last week in Silicon Valley, it became increasingly clear: Google is rallying its formidable AI efforts – prominently branded under the Gemini name but encompassing a diverse range of underlying model architectures and research – with laser focus. It is releasing a slew of innovations and technologies around it, then integrating them into products at a breathtaking pace.
    Beyond headline-grabbing features, Google laid out a bolder ambition: an operating system for the AI age – not the disk-booting kind, but a logic layer every app could tap – a “world model” meant to power a universal assistant that understands our physical surroundings, and reasons and acts on our behalf. It’s a strategic offensive that many observers may have missed amid the bamboozlement of features. 
On one hand, it’s a high-stakes strategy to leapfrog entrenched competitors. But on the other, as Google pours billions into this moonshot, a critical question looms: can Google’s brilliance in AI research and technology translate into products faster than rivals whose own brilliance lies in packaging AI into immediately accessible and commercially potent products? Can Google out-maneuver a laser-focused Microsoft, fend off OpenAI’s vertical hardware dreams, and, crucially, keep its own search empire alive in the disruptive currents of AI?
Google is already pursuing this future at dizzying scale. Pichai told I/O that the company now processes 480 trillion tokens a month – 50x more than a year ago – and almost 5x more than the 100 trillion tokens a month that Microsoft’s Satya Nadella said his company processed. This momentum is also reflected in developer adoption, with Pichai saying that over 7 million developers are now building with the Gemini API, representing a five-fold increase since the last I/O, while Gemini usage on Vertex AI has surged more than 40 times. And unit costs keep falling as Gemini 2.5 models and the Ironwood TPU squeeze more performance from each watt and dollar. AI Mode (rolling out in the U.S.) and AI Overviews (already serving 1.5 billion users monthly) are the live test beds where Google tunes latency, quality, and future ad formats as it shifts search into an AI-first era.
Source: Google I/O 2025
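Those on-stage multiples are easy to sanity-check. A back-of-the-envelope script, using only the figures quoted above as inputs:

```python
# Back-of-the-envelope check of the scale claims quoted above.
# The only inputs are the on-stage figures; nothing else is assumed.
google_monthly_tokens = 480e12      # 480 trillion tokens/month, per Pichai
microsoft_monthly_tokens = 100e12   # 100 trillion tokens/month, per Nadella
growth_multiple = 50                # "50x more than a year ago"

print(google_monthly_tokens / microsoft_monthly_tokens)  # 4.8 -> "almost 5x"
print(google_monthly_tokens / growth_multiple / 1e12)    # ~9.6 trillion/month implied a year ago
```

The numbers hold together: 480 trillion is 4.8 times Microsoft’s stated 100 trillion, and the 50x growth claim implies Google was processing roughly 9.6 trillion tokens a month a year earlier.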
Google’s doubling-down on what it calls “a world model” – an AI it aims to imbue with a deep understanding of real-world dynamics – and with it a vision for a universal assistant – one powered by Google, and not other companies – creates another big tension: How much control does Google want over this all-knowing assistant, built upon its crown jewel of search? Does it primarily want to leverage it first for itself, to save its $200 billion search business that depends on owning the starting point and avoiding disruption by OpenAI? Or will Google fully open its foundational AI for other developers and companies to leverage – another segment representing a significant portion of its business, engaging over 20 million developers, more than any other company?
    It has sometimes stopped short of a radical focus on building these core products for others with the same clarity as its nemesis, Microsoft. That’s because it keeps a lot of core functionality reserved for its cherished search engine. That said, Google is making significant efforts to provide developer access wherever possible. A telling example is Project Mariner. Google could have embedded the agentic browser-automation features directly inside Chrome, giving consumers an immediate showcase under Google’s full control. However, Google followed up by saying Mariner’s computer-use capabilities would be released via the Gemini API more broadly “this summer.” This signals that external access is coming for any rival that wants comparable automation. In fact, Google said partners Automation Anywhere and UiPath were already building with it.
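Pending that summer release, the computer-use surface itself isn’t public, but the general shape of a Gemini API call is. A minimal sketch using the google-genai Python SDK; the model id and the commented tool wiring are assumptions, not announced details:

```python
# Minimal sketch of a Gemini API call via the google-genai Python SDK.
# Mariner's computer-use tooling had not shipped in the API at the time of
# writing, so the comment below only marks where it would presumably attach;
# the model id is an assumption rather than a confirmed endpoint.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder credential

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model id
    contents="Find the contact form on example.com and fill in the fields.",
    # config=...  <- hypothetical: a browser/computer-use tool declaration
    #                would likely be registered here once the capability ships.
)
print(response.text)
```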
    Google’s grand design: the ‘world model’ and universal assistant
The clearest articulation of Google’s grand design came from Demis Hassabis, CEO of Google DeepMind, during the I/O keynote. He stated Google continued to “double down” on efforts towards artificial general intelligence (AGI). While Gemini was already “the best multimodal model,” Hassabis explained, Google is working hard to “extend it to become what we call a world model. That is a model that can make plans and imagine new experiences by simulating aspects of the world, just like the brain does.”
This concept of “a world model,” as articulated by Hassabis, is about creating AI that learns the underlying principles of how the world works – simulating cause and effect, understanding intuitive physics, and ultimately learning by observing, much like a human does. An early yet significant indicator of this direction, easily overlooked by those not steeped in foundational AI research, is Google DeepMind’s work on models like Genie 2. This research shows how to generate interactive, two-dimensional game environments and playable worlds from varied prompts like images or text. It offers a glimpse of an AI that can simulate and understand dynamic systems.
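In reinforcement-learning terms, the core idea is a learned transition function: given a state and an action, predict the next state, so an agent can plan in imagination rather than by trial and error in the real world. A deliberately toy illustration of that shape follows; it is not Genie 2, and the 1-D point-mass physics is invented purely for the example:

```python
# Toy illustration of the world-model idea: a transition function that lets
# an agent "imagine" rollouts instead of acting in the real environment.
# The 1-D point-mass dynamics here are invented for the example.
def world_model(state: tuple[float, float], action: float) -> tuple[float, float]:
    pos, vel = state
    vel += 0.1 * action   # integrate an applied force (imagined dynamics)
    pos += 0.1 * vel      # integrate velocity
    return pos, vel

# Plan by simulating a candidate action sequence without touching reality.
state = (0.0, 0.0)
for action in (1.0, 1.0, -0.5):
    state = world_model(state, action)
print(state)  # imagined final (position, velocity)
```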
Hassabis has developed this concept of a “world model” and its manifestation as a “universal AI assistant” in several talks since late 2024, and it was presented at I/O most comprehensively – with CEO Sundar Pichai and Gemini lead Josh Woodward echoing the vision on the same stage. (While other AI leaders, including Microsoft’s Satya Nadella, OpenAI’s Sam Altman, and xAI’s Elon Musk, have all discussed “world models,” Google uniquely and most comprehensively ties this foundational concept to its near-term strategic thrust: the “universal AI assistant.”) Speaking about the Gemini app, Google’s equivalent to OpenAI’s ChatGPT, Hassabis declared, “This is our ultimate vision for the Gemini app, to transform it into a universal AI assistant, an AI that’s personal, proactive, and powerful, and one of our key milestones on the road to AGI.”
This vision was made tangible through I/O demonstrations. Google demoed a new app called Flow – a drag-and-drop filmmaking canvas that preserves character and camera consistency – that leverages Veo 3, the new model that layers physics-aware video and native audio. To Hassabis, that pairing is early proof that “world-model understanding is already leaking into creative tooling.” For robotics, he separately highlighted the fine-tuned Gemini Robotics model, arguing that “AI systems will need world models to operate effectively.”
CEO Sundar Pichai reinforced this, citing Project Astra, which “explores the future capabilities of a universal AI assistant that can understand the world around you.” These Astra capabilities, like live video understanding and screen sharing, are now integrated into Gemini Live. Josh Woodward, who leads Google Labs and the Gemini App, detailed the app’s goal to be the “most personal, proactive, and powerful AI assistant.” He showcased how “personal context” (connecting search history, and soon Gmail/Calendar) enables Gemini to anticipate needs, like providing personalized exam quizzes or custom explainer videos using analogies a user understands (e.g., thermodynamics explained via cycling). This, Woodward emphasized, is “where we’re headed with Gemini,” enabled by the Gemini 2.5 Pro model allowing users to “think things into existence.” The new developer tools unveiled at I/O are building blocks. Gemini 2.5 Pro with “Deep Think” and the hyper-efficient 2.5 Flash (now with native audio and URL context grounding from the Gemini API) form the core intelligence. Google also quietly previewed Gemini Diffusion, signalling its willingness to move beyond pure Transformer stacks when that yields better efficiency or latency. Google is stuffing these capabilities into a crowded toolkit: AI Studio and Firebase Studio are core starting points for developers, while Vertex AI remains the enterprise on-ramp.
    The strategic stakes: defending search, courting developers amid an AI arms race
This colossal undertaking is driven by Google’s massive R&D capabilities but also by strategic necessity. In the enterprise software landscape, Microsoft has a formidable hold, a Fortune 500 Chief AI Officer told VentureBeat, reassuring customers with its full commitment to Copilot tooling. The executive requested anonymity because of the sensitivity of commenting on the intense competition between the AI cloud providers. Microsoft’s dominance in Office 365 productivity applications will be exceptionally hard to dislodge through direct feature-for-feature competition, the executive said.
    Google’s path to potential leadership – its “end-run” around Microsoft’s enterprise hold – lies in redefining the game with a fundamentally superior, AI-native interaction paradigm. If Google delivers a truly “universal AI assistant” powered by a comprehensive world model, it could become the new indispensable layer – the effective operating system – for how users and businesses interact with technology. As Pichai mused with podcaster David Friedberg shortly before I/O, that means awareness of physical surroundings. And so AR glasses, Pichai said, “maybe that’s the next leap…that’s what’s exciting for me.”
But this AI offensive is a race against multiple clocks. First, the $200 billion search-ads engine that funds Google must be protected even as it is reinvented. The U.S. Department of Justice’s monopolization ruling still hangs over Google – divestiture of Chrome has been floated as the leading remedy. And in Europe, the Digital Markets Act as well as emerging copyright-liability lawsuits could hem in how freely Gemini crawls or displays the open web.
Finally, execution speed matters. Google has been criticized for moving slowly in past years. But over the past 12 months, it became clear Google had been working patiently on multiple fronts, and that patience has paid off with faster growth than rivals. The challenge of successfully navigating this AI transition at massive scale is immense, as evidenced by the recent Bloomberg report detailing how even a tech titan like Apple is grappling with significant setbacks and internal reorganizations in its AI initiatives. This industry-wide difficulty underscores the high stakes for all players. While Pichai lacks the showmanship of some rivals, the long list of enterprise customer testimonials Google paraded at its Cloud Next event last month – about actual AI deployments – underscores a leader who lets sustained product cadence and enterprise wins speak for themselves.
At the same time, focused competitors advance. Microsoft’s enterprise march continues. Its Build conference showcased Microsoft 365 Copilot as the “UI for AI,” Azure AI Foundry as a “production line for intelligence,” and Copilot Studio for sophisticated agent-building, with impressive low-code workflow demos (Microsoft Build Keynote, Miti Joshi at 22:52, Kadesha Kerr at 51:26). Nadella’s “open agentic web” vision (NLWeb, MCP) offers businesses a pragmatic AI adoption path, allowing selective integration of AI tech – whether it be Google’s or another competitor’s – within a Microsoft-centric framework.
OpenAI, meanwhile, is way out ahead with the consumer reach of its ChatGPT product, with recent references by the company to having 600 million monthly users, and 800 million weekly users. This compares to the Gemini app’s 400 million monthly users. And in December, OpenAI launched a full-blown search offering, and is reportedly planning an ad offering – posing what could be an existential threat to Google’s search model. Beyond making leading models, OpenAI is making a provocative vertical play with its reported $6.5 billion acquisition of Jony Ive’s IO, pledging to move “beyond these legacy products” – and hinting that it was launching a hardware product that would attempt to disrupt AI just like the iPhone disrupted mobile. While any of this may potentially disrupt Google’s next-gen personal computing ambitions, it’s also true that OpenAI’s ability to build a deep moat like Apple did with the iPhone may be limited in an AI era increasingly defined by open protocols (like MCP) and easier model interchangeability.
Internally, Google navigates its vast ecosystem. As Jeanine Banks, Google’s VP of Developer X, told VentureBeat, serving Google’s diverse global developer community means “it’s not a one size fits all,” leading to a rich but sometimes complex array of tools – AI Studio, Vertex AI, Firebase Studio, numerous APIs.
    Meanwhile, Amazon is pressing from another flank: Bedrock already hosts Anthropic, Meta, Mistral and Cohere models, giving AWS customers a pragmatic, multi-model default.
    For enterprise decision-makers: navigating Google’s ‘world model’ future
    Google’s audacious bid to build the foundational intelligence for the AI age presents enterprise leaders with compelling opportunities and critical considerations:

    Move now or retrofit later: Falling a release cycle behind could force costly rewrites when assistant-first interfaces become default.
Tap into revolutionary potential: For organizations seeking to embrace the most powerful AI, leveraging Google’s “world model” research, multimodal capabilities (like Veo 3 and Imagen 4, showcased by Woodward at I/O), and the AGI trajectory promised by Google offers a path to potentially significant innovation.
    Prepare for a new interaction paradigm: Success for Google’s “universal assistant” would mean a primary new interface for services and data. Enterprises should strategize for integration via APIs and agentic frameworks for context-aware delivery.
Factor in the long game (and its risks): Aligning with Google’s vision is a long-term commitment. The full “world model” and AGI are potentially distant horizons. Decision-makers must balance this with immediate needs and platform complexities.
    Contrast with focused alternatives: Pragmatic solutions from Microsoft offer tangible enterprise productivity now. Disruptive hardware-AI from OpenAI/IO presents another distinct path. A diversified strategy, leveraging the best of each, often makes sense, especially with the increasingly open agentic web allowing for such flexibility.

    These complex choices and real-world AI adoption strategies will be central to discussions at VentureBeat’s Transform 2025 next month. The leading independent event brings enterprise technical decision-makers together with leaders from pioneering companies to share firsthand experiences on platform choices – Google, Microsoft, and beyond – and navigating AI deployment, all curated by the VentureBeat editorial team. With limited seating, early registration is encouraged.
    Google’s defining offensive: shaping the future or strategic overreach?
Google’s I/O spectacle was a strong statement: the company signalled that it intends to architect and operate the foundational intelligence of the AI-driven future. Its pursuit of a “world model” and its AGI ambitions aim to redefine computing, outflank competitors, and secure its dominance. The audacity is compelling; the technological promise is immense.
    The big question is execution and timing. Can Google innovate and integrate its vast technologies into a cohesive, compelling experience faster than rivals solidify their positions? Can it do so while transforming search and navigating regulatory challenges? And can it do so while focused so broadly on both consumers and business – an agenda that is arguably much broader than that of its key competitors?
    The next few years will be pivotal. If Google delivers on its “world model” vision, it may usher in an era of personalized, ambient intelligence, effectively becoming the new operational layer for our digital lives. If not, its grand ambition could be a cautionary tale of a giant reaching for everything, only to find the future defined by others who aimed more specifically, more quickly. 

    Daily insights on business use cases with VB Daily
    If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.
    Read our Privacy Policy

    Thanks for subscribing. Check out more VB newsletters here.

    An error occured.
    #googles #worldmodel #bet #building #operating
    Google’s ‘world-model’ bet: building the AI operating layer before Microsoft captures the UI
    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More After three hours at Google’s I/O 2025 event last week in Silicon Valley, it became increasingly clear: Google is rallying its formidable AI efforts – prominently branded under the Gemini name but encompassing a diverse range of underlying model architectures and research – with laser focus. It is releasing a slew of innovations and technologies around it, then integrating them into products at a breathtaking pace. Beyond headline-grabbing features, Google laid out a bolder ambition: an operating system for the AI age – not the disk-booting kind, but a logic layer every app could tap – a “world model” meant to power a universal assistant that understands our physical surroundings, and reasons and acts on our behalf. It’s a strategic offensive that many observers may have missed amid the bamboozlement of features.  On one hand, it’s a high-stakes strategy to leapfrog entrenched competitors. But on the other, as Google pours billions into this moonshot, a critical question looms: Can Google’s brilliance in AI research and technology translate into products faster than its rivals, whose edge has its own brilliance: packaging AI into immediately accessible and commercially potent products? Can Google out-maneuver a laser-focused Microsoft, fend off OpenAI’s vertical hardware dreams, and, crucially, keep its own search empire alive in the disruptive currents of AI? Google is already pursuing this future at dizzying scale. Pichai told I/O that the company now processes 480 trillion tokens a month – 50× more than a year ago – and almost 5x more than the 100 trillion tokens a month that Microsoft’s Satya Nadella said his company processed. This momentum is also reflected in developer adoption, with Pichai saying that over 7 million developers are now building with the Gemini API, representing a five-fold increase since the last I/O, while Gemini usage on Vertex AI has surged more than 40 times. And unit costs keep falling as Gemini 2.5 models and the Ironwood TPU squeeze more performance from each watt and dollar. AI Modeand AI Overviewsare the live test beds where Google tunes latency, quality, and future ad formats as it shifts search into an AI-first era. Source: Google I/O 20025 Google’s doubling-down on what it calls “a world model” – an AI it aims to imbue with a deep understanding of real-world dynamics – and with it a vision for a universal assistant – one powered by Google, and not other companies – creates another big tension: How much control does Google want over this all-knowing assistant, built upon its crown jewel of search? Does it primarily want to leverage it first for itself, to save its billion search business that depends on owning the starting point and avoiding disruption by OpenAI? Or will Google fully open its foundational AI for other developers and companies to leverage – another  segment representing a significant portion of its business, engaging over 20 million developers, more than any other company?  It has sometimes stopped short of a radical focus on building these core products for others with the same clarity as its nemesis, Microsoft. That’s because it keeps a lot of core functionality reserved for its cherished search engine. That said, Google is making significant efforts to provide developer access wherever possible. A telling example is Project Mariner. 
Google could have embedded the agentic browser-automation features directly inside Chrome, giving consumers an immediate showcase under Google’s full control. However, Google followed up by saying Mariner’s computer-use capabilities would be released via the Gemini API more broadly “this summer.” This signals that external access is coming for any rival that wants comparable automation. In fact, Google said partners Automation Anywhere and UiPath were already building with it. Google’s grand design: the ‘world model’ and universal assistant The clearest articulation of Google’s grand design came from Demis Hassabis, CEO of Google DeepMind, during the I/O keynote. He stated Google continued to “double down” on efforts towards artificial general intelligence. While Gemini was already “the best multimodal model,” Hassabis explained, Google is working hard to “extend it to become what we call a world model. That is a model that can make plans and imagine new experiences by simulating aspects of the world, just like the brain does.”  This concept of ‘a world model,’ as articulated by Hassabis, is about creating AI that learns the underlying principles of how the world works – simulating cause and effect, understanding intuitive physics, and ultimately learning by observing, much like a human does. An early, perhaps easily overlooked by those not steeped in foundational AI research, yet significant indicator of this direction is Google DeepMind’s work on models like Genie 2. This research shows how to generate interactive, two-dimensional game environments and playable worlds from varied prompts like images or text. It offers a glimpse at an AI that can simulate and understand dynamic systems. Hassabis has developed this concept of a “world model” and its manifestation as a “universal AI assistant” in several talks since late 2024, and it was presented at I/O most comprehensively – with CEO Sundar Pichai and Gemini lead Josh Woodward echoing the vision on the same stage.Speaking about the Gemini app, Google’s equivalent to OpenAI’s ChatGPT, Hassabis declared, “This is our ultimate vision for the Gemini app, to transform it into a universal AI assistant, an AI that’s personal, proactive, and powerful, and one of our key milestones on the road to AGI.”  This vision was made tangible through I/O demonstrations. Google demoed a new app called Flow – a drag-and-drop filmmaking canvas that preserves character and camera consistency – that leverages Veo 3, the new model that layers physics-aware video and native audio. To Hassabis, that pairing is early proof that ‘world-model understanding is already leaking into creative tooling.’ For robotics, he separately highlighted the fine-tuned Gemini Robotics model, arguing that ‘AI systems will need world models to operate effectively.” CEO Sundar Pichai reinforced this, citing Project Astra which “explores the future capabilities of a universal AI assistant that can understand the world around you.” These Astra capabilities, like live video understanding and screen sharing, are now integrated into Gemini Live. Josh Woodward, who leads Google Labs and the Gemini App, detailed the app’s goal to be the “most personal, proactive, and powerful AI assistant.” He showcased how “personal context”enables Gemini to anticipate needs, like providing personalized exam quizzes or custom explainer videos using analogies a user understandsform the core intelligence. 
Google also quietly previewed Gemini Diffusion, signalling its willingness to move beyond pure Transformer stacks when that yields better efficiency or latency. Google is stuffing these capabilities into a crowded toolkit: AI Studio and Firebase Studio are core starting points for developers, while Vertex AI remains the enterprise on-ramp. The strategic stakes: defending search, courting developers amid an AI arms race This colossal undertaking is driven by Google’s massive R&D capabilities but also by strategic necessity. In the enterprise software landscape, Microsoft has a formidable hold, a Fortune 500 Chief AI Officer told VentureBeat, reassuring customers with its full commitment to tooling Copilot. The executive requested anonymity because of the sensitivity of commenting on the intense competition between the AI cloud providers. Microsoft’s dominance in Office 365 productivity applications will be exceptionally hard to dislodge through direct feature-for-feature competition, the executive said. Google’s path to potential leadership – its “end-run” around Microsoft’s enterprise hold – lies in redefining the game with a fundamentally superior, AI-native interaction paradigm. If Google delivers a truly “universal AI assistant” powered by a comprehensive world model, it could become the new indispensable layer – the effective operating system – for how users and businesses interact with technology. As Pichai mused with podcaster David Friedberg shortly before I/O, that means awareness of physical surroundings. And so AR glasses, Pichai said, “maybe that’s the next leap…that’s what’s exciting for me.” But this AI offensive is a race against multiple clocks. First, the billion search-ads engine that funds Google must be protected even as it is reinvented. The U.S. Department of Justice’s monopolization ruling still hangs over Google – divestiture of Chrome has been floated as the leading remedy. And in Europe, the Digital Markets Act as well as emerging copyright-liability lawsuits could hem in how freely Gemini crawls or displays the open web. Finally, execution speed matters. Google has been criticized for moving slowly in past years. But over the past 12 months, it became clear Google had been working patiently on multiple fronts, and that it has paid off with faster growth than rivals. The challenge of successfully navigating this AI transition at massive scale is immense, as evidenced by the recent Bloomberg report detailing how even a tech titan like Apple is grappling with significant setbacks and internal reorganizations in its AI initiatives. This industry-wide difficulty underscores the high stakes for all players. While Pichai lacks the showmanship of some rivals, the long list of enterprise customer testimonials Google paraded at its Cloud Next event last month – about actual AI deployments – underscores a leader who lets sustained product cadence and enterprise wins speak for themselves.  At the same time, focused competitors advance. Microsoft’s enterprise march continues. Its Build conference showcased Microsoft 365 Copilot as the “UI for AI,” Azure AI Foundry as a “production line for intelligence,” and Copilot Studio for sophisticated agent-building, with impressive low-code workflow demos. Nadella’s “open agentic web” visionoffers businesses a pragmatic AI adoption path, allowing selective integration of AI tech – whether it be Google’s or another competitor’s – within a Microsoft-centric framework. 
OpenAI, meanwhile, is way out ahead with the consumer reach of its ChatGPT product, with recent references by the company to having 600 million monthly users, and 800 million weekly users. This compares to the Gemini app’s 400 million monthly users. And in December, OpenAI launched a full-blown search offering, and is reportedly planning an ad offering – posing what could be an existential threat to Google’s search model. Beyond making leading models, OpenAI is making a provocative vertical play with its reported billion acquisition of Jony Ive’s IO, pledging to move “beyond these legacy products” – and hinting that it was launching a hardware product that would attempt to disrupt AI just like the iPhone disrupted mobile. While any of this may potentially disrupt Google’s next-gen personal computing ambitions, it’s also true that OpenAI’s ability to build a deep moat like Apple did with the iPhone may be limited in an AI era increasingly defined by open protocolsand easier model interchangeability. Internally, Google navigates its vast ecosystem. As Jeanine Banks, Google’s VP of Developer X, told VentureBeat serving Google’s diverse global developer community means “it’s not a one size fits all,” leading to a rich but sometimes complex array of tools – AI Studio, Vertex AI, Firebase Studio, numerous APIs. Meanwhile, Amazon is pressing from another flank: Bedrock already hosts Anthropic, Meta, Mistral and Cohere models, giving AWS customers a pragmatic, multi-model default. For enterprise decision-makers: navigating Google’s ‘world model’ future Google’s audacious bid to build the foundational intelligence for the AI age presents enterprise leaders with compelling opportunities and critical considerations: Move now or retrofit later: Falling a release cycle behind could force costly rewrites when assistant-first interfaces become default. Tap into revolutionary potential: For organizations seeking to embrace the most powerful AI, leveraging Google’s “world model” research, multimodal capabilities, and the AGI trajectory promised by Google offers a path to potentially significant innovation. Prepare for a new interaction paradigm: Success for Google’s “universal assistant” would mean a primary new interface for services and data. Enterprises should strategize for integration via APIs and agentic frameworks for context-aware delivery. Factor in the long game: Aligning with Google’s vision is a long-term commitment. The full “world model” and AGI are potentially distant horizons. Decision-makers must balance this with immediate needs and platform complexities. Contrast with focused alternatives: Pragmatic solutions from Microsoft offer tangible enterprise productivity now. Disruptive hardware-AI from OpenAI/IO presents another distinct path. A diversified strategy, leveraging the best of each, often makes sense, especially with the increasingly open agentic web allowing for such flexibility. These complex choices and real-world AI adoption strategies will be central to discussions at VentureBeat’s Transform 2025 next month. The leading independent event brings enterprise technical decision-makers together with leaders from pioneering companies to share firsthand experiences on platform choices – Google, Microsoft, and beyond – and navigating AI deployment, all curated by the VentureBeat editorial team. With limited seating, early registration is encouraged. Google’s defining offensive: shaping the future or strategic overreach? 
Google’s I/O spectacle was a strong statement: Google signalled that it intends to architect and operate the foundational intelligence of the AI-driven future. Its pursuit of a “world model” and its AGI ambitions aim to redefine computing, outflank competitors, and secure its dominance. The audacity is compelling; the technological promise is immense. The big question is execution and timing. Can Google innovate and integrate its vast technologies into a cohesive, compelling experience faster than rivals solidify their positions? Can it do so while transforming search and navigating regulatory challenges? And can it do so while focused so broadly on both consumers and business – an agenda that is arguably much broader than that of its key competitors? The next few years will be pivotal. If Google delivers on its “world model” vision, it may usher in an era of personalized, ambient intelligence, effectively becoming the new operational layer for our digital lives. If not, its grand ambition could be a cautionary tale of a giant reaching for everything, only to find the future defined by others who aimed more specifically, more quickly.  Daily insights on business use cases with VB Daily If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI. Read our Privacy Policy Thanks for subscribing. Check out more VB newsletters here. An error occured. #googles #worldmodel #bet #building #operating
    Google’s ‘world-model’ bet: building the AI operating layer before Microsoft captures the UI
    venturebeat.com
    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More After three hours at Google’s I/O 2025 event last week in Silicon Valley, it became increasingly clear: Google is rallying its formidable AI efforts – prominently branded under the Gemini name but encompassing a diverse range of underlying model architectures and research – with laser focus. It is releasing a slew of innovations and technologies around it, then integrating them into products at a breathtaking pace. Beyond headline-grabbing features, Google laid out a bolder ambition: an operating system for the AI age – not the disk-booting kind, but a logic layer every app could tap – a “world model” meant to power a universal assistant that understands our physical surroundings, and reasons and acts on our behalf. It’s a strategic offensive that many observers may have missed amid the bamboozlement of features.  On one hand, it’s a high-stakes strategy to leapfrog entrenched competitors. But on the other, as Google pours billions into this moonshot, a critical question looms: Can Google’s brilliance in AI research and technology translate into products faster than its rivals, whose edge has its own brilliance: packaging AI into immediately accessible and commercially potent products? Can Google out-maneuver a laser-focused Microsoft, fend off OpenAI’s vertical hardware dreams, and, crucially, keep its own search empire alive in the disruptive currents of AI? Google is already pursuing this future at dizzying scale. Pichai told I/O that the company now processes 480 trillion tokens a month – 50× more than a year ago – and almost 5x more than the 100 trillion tokens a month that Microsoft’s Satya Nadella said his company processed. This momentum is also reflected in developer adoption, with Pichai saying that over 7 million developers are now building with the Gemini API, representing a five-fold increase since the last I/O, while Gemini usage on Vertex AI has surged more than 40 times. And unit costs keep falling as Gemini 2.5 models and the Ironwood TPU squeeze more performance from each watt and dollar. AI Mode (rolling out in the U.S.) and AI Overviews (already serving 1.5 billion users monthly) are the live test beds where Google tunes latency, quality, and future ad formats as it shifts search into an AI-first era. Source: Google I/O 20025 Google’s doubling-down on what it calls “a world model” – an AI it aims to imbue with a deep understanding of real-world dynamics – and with it a vision for a universal assistant – one powered by Google, and not other companies – creates another big tension: How much control does Google want over this all-knowing assistant, built upon its crown jewel of search? Does it primarily want to leverage it first for itself, to save its $200 billion search business that depends on owning the starting point and avoiding disruption by OpenAI? Or will Google fully open its foundational AI for other developers and companies to leverage – another  segment representing a significant portion of its business, engaging over 20 million developers, more than any other company?  It has sometimes stopped short of a radical focus on building these core products for others with the same clarity as its nemesis, Microsoft. That’s because it keeps a lot of core functionality reserved for its cherished search engine. That said, Google is making significant efforts to provide developer access wherever possible. A telling example is Project Mariner. 
Google could have embedded the agentic browser-automation features directly inside Chrome, giving consumers an immediate showcase under Google’s full control. However, Google followed up by saying Mariner’s computer-use capabilities would be released via the Gemini API more broadly “this summer.” This signals that external access is coming for any rival that wants comparable automation. In fact, Google said partners Automation Anywhere and UiPath were already building with it. Google’s grand design: the ‘world model’ and universal assistant The clearest articulation of Google’s grand design came from Demis Hassabis, CEO of Google DeepMind, during the I/O keynote. He stated Google continued to “double down” on efforts towards artificial general intelligence (AGI). While Gemini was already “the best multimodal model,” Hassabis explained, Google is working hard to “extend it to become what we call a world model. That is a model that can make plans and imagine new experiences by simulating aspects of the world, just like the brain does.”  This concept of ‘a world model,’ as articulated by Hassabis, is about creating AI that learns the underlying principles of how the world works – simulating cause and effect, understanding intuitive physics, and ultimately learning by observing, much like a human does. An early, perhaps easily overlooked by those not steeped in foundational AI research, yet significant indicator of this direction is Google DeepMind’s work on models like Genie 2. This research shows how to generate interactive, two-dimensional game environments and playable worlds from varied prompts like images or text. It offers a glimpse at an AI that can simulate and understand dynamic systems. Hassabis has developed this concept of a “world model” and its manifestation as a “universal AI assistant” in several talks since late 2024, and it was presented at I/O most comprehensively – with CEO Sundar Pichai and Gemini lead Josh Woodward echoing the vision on the same stage. (While other AI leaders, including Microsoft’s Satya Nadella, OpenAI’s Sam Altman, and xAI’s Elon Musk have all discussed ‘world models,” Google uniquely and most comprehensively ties this foundational concept to its near-term strategic thrust: the ‘universal AI assistant.) Speaking about the Gemini app, Google’s equivalent to OpenAI’s ChatGPT, Hassabis declared, “This is our ultimate vision for the Gemini app, to transform it into a universal AI assistant, an AI that’s personal, proactive, and powerful, and one of our key milestones on the road to AGI.”  This vision was made tangible through I/O demonstrations. Google demoed a new app called Flow – a drag-and-drop filmmaking canvas that preserves character and camera consistency – that leverages Veo 3, the new model that layers physics-aware video and native audio. To Hassabis, that pairing is early proof that ‘world-model understanding is already leaking into creative tooling.’ For robotics, he separately highlighted the fine-tuned Gemini Robotics model, arguing that ‘AI systems will need world models to operate effectively.” CEO Sundar Pichai reinforced this, citing Project Astra which “explores the future capabilities of a universal AI assistant that can understand the world around you.” These Astra capabilities, like live video understanding and screen sharing, are now integrated into Gemini Live. 
Josh Woodward, who leads Google Labs and the Gemini App, detailed the app’s goal to be the “most personal, proactive, and powerful AI assistant.” He showcased how “personal context” (connecting search history, and soon Gmail/Calendar) enables Gemini to anticipate needs, like providing personalized exam quizzes or custom explainer videos using analogies a user understands (e.g., thermodynamics explained via cycling. This, Woodward emphasized, is “where we’re headed with Gemini,” enabled by the Gemini 2.5 Pro model allowing users to “think things into existence.”  The new developer tools unveiled at I/O are building blocks. Gemini 2.5 Pro with “Deep Think” and the hyper-efficient 2.5 Flash (now with native audio and URL context grounding from Gemini API) form the core intelligence. Google also quietly previewed Gemini Diffusion, signalling its willingness to move beyond pure Transformer stacks when that yields better efficiency or latency. Google is stuffing these capabilities into a crowded toolkit: AI Studio and Firebase Studio are core starting points for developers, while Vertex AI remains the enterprise on-ramp. The strategic stakes: defending search, courting developers amid an AI arms race This colossal undertaking is driven by Google’s massive R&D capabilities but also by strategic necessity. In the enterprise software landscape, Microsoft has a formidable hold, a Fortune 500 Chief AI Officer told VentureBeat, reassuring customers with its full commitment to tooling Copilot. The executive requested anonymity because of the sensitivity of commenting on the intense competition between the AI cloud providers. Microsoft’s dominance in Office 365 productivity applications will be exceptionally hard to dislodge through direct feature-for-feature competition, the executive said. Google’s path to potential leadership – its “end-run” around Microsoft’s enterprise hold – lies in redefining the game with a fundamentally superior, AI-native interaction paradigm. If Google delivers a truly “universal AI assistant” powered by a comprehensive world model, it could become the new indispensable layer – the effective operating system – for how users and businesses interact with technology. As Pichai mused with podcaster David Friedberg shortly before I/O, that means awareness of physical surroundings. And so AR glasses, Pichai said, “maybe that’s the next leap…that’s what’s exciting for me.” But this AI offensive is a race against multiple clocks. First, the $200 billion search-ads engine that funds Google must be protected even as it is reinvented. The U.S. Department of Justice’s monopolization ruling still hangs over Google – divestiture of Chrome has been floated as the leading remedy. And in Europe, the Digital Markets Act as well as emerging copyright-liability lawsuits could hem in how freely Gemini crawls or displays the open web. Finally, execution speed matters. Google has been criticized for moving slowly in past years. But over the past 12 months, it became clear Google had been working patiently on multiple fronts, and that it has paid off with faster growth than rivals. The challenge of successfully navigating this AI transition at massive scale is immense, as evidenced by the recent Bloomberg report detailing how even a tech titan like Apple is grappling with significant setbacks and internal reorganizations in its AI initiatives. This industry-wide difficulty underscores the high stakes for all players. 
While Pichai lacks the showmanship of some rivals, the long list of enterprise customer testimonials Google paraded at its Cloud Next event last month – about actual AI deployments – underscores a leader who lets sustained product cadence and enterprise wins speak for themselves.  At the same time, focused competitors advance. Microsoft’s enterprise march continues. Its Build conference showcased Microsoft 365 Copilot as the “UI for AI,” Azure AI Foundry as a “production line for intelligence,” and Copilot Studio for sophisticated agent-building, with impressive low-code workflow demos (Microsoft Build Keynote, Miti Joshi at 22:52, Kadesha Kerr at 51:26). Nadella’s “open agentic web” vision (NLWeb, MCP) offers businesses a pragmatic AI adoption path, allowing selective integration of AI tech – whether it be Google’s or another competitor’s – within a Microsoft-centric framework. OpenAI, meanwhile, is way out ahead with the consumer reach of its ChatGPT product, with recent references by the company to having 600 million monthly users, and 800 million weekly users. This compares to the Gemini app’s 400 million monthly users. And in December, OpenAI launched a full-blown search offering, and is reportedly planning an ad offering – posing what could be an existential threat to Google’s search model. Beyond making leading models, OpenAI is making a provocative vertical play with its reported $6.5 billion acquisition of Jony Ive’s IO, pledging to move “beyond these legacy products” – and hinting that it was launching a hardware product that would attempt to disrupt AI just like the iPhone disrupted mobile. While any of this may potentially disrupt Google’s next-gen personal computing ambitions, it’s also true that OpenAI’s ability to build a deep moat like Apple did with the iPhone may be limited in an AI era increasingly defined by open protocols (like MCP) and easier model interchangeability. Internally, Google navigates its vast ecosystem. As Jeanine Banks, Google’s VP of Developer X, told VentureBeat serving Google’s diverse global developer community means “it’s not a one size fits all,” leading to a rich but sometimes complex array of tools – AI Studio, Vertex AI, Firebase Studio, numerous APIs. Meanwhile, Amazon is pressing from another flank: Bedrock already hosts Anthropic, Meta, Mistral and Cohere models, giving AWS customers a pragmatic, multi-model default. For enterprise decision-makers: navigating Google’s ‘world model’ future Google’s audacious bid to build the foundational intelligence for the AI age presents enterprise leaders with compelling opportunities and critical considerations: Move now or retrofit later: Falling a release cycle behind could force costly rewrites when assistant-first interfaces become default. Tap into revolutionary potential: For organizations seeking to embrace the most powerful AI, leveraging Google’s “world model” research, multimodal capabilities (like Veo 3 and Imagen 4 showcased by Woodward at I/O), and the AGI trajectory promised by Google offers a path to potentially significant innovation. Prepare for a new interaction paradigm: Success for Google’s “universal assistant” would mean a primary new interface for services and data. Enterprises should strategize for integration via APIs and agentic frameworks for context-aware delivery. Factor in the long game (and its risks): Aligning with Google’s vision is a long-term commitment. The full “world model” and AGI are potentially distant horizons. 
Decision-makers must balance this with immediate needs and platform complexities. Contrast with focused alternatives: Pragmatic solutions from Microsoft offer tangible enterprise productivity now. Disruptive hardware-AI from OpenAI/IO presents another distinct path. A diversified strategy, leveraging the best of each, often makes sense, especially with the increasingly open agentic web allowing for such flexibility. These complex choices and real-world AI adoption strategies will be central to discussions at VentureBeat’s Transform 2025 next month. The leading independent event brings enterprise technical decision-makers together with leaders from pioneering companies to share firsthand experiences on platform choices – Google, Microsoft, and beyond – and navigating AI deployment, all curated by the VentureBeat editorial team. With limited seating, early registration is encouraged. Google’s defining offensive: shaping the future or strategic overreach? Google’s I/O spectacle was a strong statement: Google signalled that it intends to architect and operate the foundational intelligence of the AI-driven future. Its pursuit of a “world model” and its AGI ambitions aim to redefine computing, outflank competitors, and secure its dominance. The audacity is compelling; the technological promise is immense. The big question is execution and timing. Can Google innovate and integrate its vast technologies into a cohesive, compelling experience faster than rivals solidify their positions? Can it do so while transforming search and navigating regulatory challenges? And can it do so while focused so broadly on both consumers and business – an agenda that is arguably much broader than that of its key competitors? The next few years will be pivotal. If Google delivers on its “world model” vision, it may usher in an era of personalized, ambient intelligence, effectively becoming the new operational layer for our digital lives. If not, its grand ambition could be a cautionary tale of a giant reaching for everything, only to find the future defined by others who aimed more specifically, more quickly.  Daily insights on business use cases with VB Daily If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI. Read our Privacy Policy Thanks for subscribing. Check out more VB newsletters here. An error occured.
  • Robots square off in world’s first humanoid boxing match

    The humanoid robots are fighting.
     
    Image: Unitree


    After decades of being tortured, shoved, kicked, burned, and bludgeoned, robots are finally getting their chance to fight back. Sort of. 
    This weekend, Chinese robotics maker Unitree says it will livestream the world’s first boxing match between two of its humanoid robots. The event, titled Unitree Iron Fist King: Awakening, will feature a face-off between two of Unitree’s 4.3-foot-tall G1 robots. The robots will reportedly be remotely controlled by human engineers, though they are also expected to demonstrate some autonomous, pre-programmed actions. Earlier this week, the two robots previewed some of their moves at an elementary school in Hangzhou, China.
    Video released by Unitree earlier this month shows the robots, boxing gloves strapped on, “training” with their human coaches. The petite robots throw a few hooks with their arms before being pushed to the ground. One quickly gets back up and, after briefly struggling to face the right direction, spins around and delivers a straight kick, 300-style. Unitree claims its robots use a motion-capture training system that helps them learn from past mistakes and improve over time.

    The training video also shows the two robots briefly sparring with each other. The clacking sound of steel fills the room as they exchange a flurry of punches. At one point, both simultaneously deliver knee kicks to each other’s groin area, sending the robot in blue gear tumbling to the ground.
    “The robot is actively learning even more here skills,” the company notes in a caption towards the end of the video. 
    Humans have a long history of forcing robots to fight 
    The human tendency to force robots to fight for our amusement isn’t entirely new. The show BattleBots, which dates back to the late 1990s, revolved around engineers designing and building remote-controlled robots, often armed to the teeth with electric saws and flamethrowers, and forcing them to duke it out. Many, many robots were reduced to scrap metal over the show’s 12 seasons. 

    Since then, engineers around the world have been experimenting with new ways to teach bipedal, humanoid robots how to throw punches and land kicks without stumbling or falling. Sometimes these machines are remotely controlled by human operators. In other cases, semi-autonomous robots have learned to “mirror” physical movements observed in humans. More advanced autonomous robots, like those being developed by Boston Dynamics and Figure, can move around their environment and perform pre-programmed actions. Neither of those companies, it’s worth noting, has announced any plans to make its robots fight. 
    China is quickly becoming center stage for public displays of humanoid robot athletic competition. Last month, more than 20 robotics companies entered their robots into a half-marathon race in Beijing, where they competed against each other and human runners. The results were underwhelming. Media reports from the event claimed many of the machines failed to make it past the starting line. Others veered off course, with one reportedly even crashing into a barrier. The first robot to cross the finish line—a machine designed by the Beijing Humanoid Robot Innovation Center—did so nearly an hour and forty minutes after the first human completed the race. Only six robots finished.
  • ChatGPT: Everything you need to know about the AI-powered chatbot

    ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to supercharge productivity through writing essays and code with short text prompts has evolved into a behemoth with 300 million weekly active users.
    2024 was a big year for OpenAI, from its partnership with Apple for its generative AI offering, Apple Intelligence, to the release of GPT-4o with voice capabilities and the highly anticipated launch of its text-to-video model Sora.
    OpenAI also faced its share of internal drama, including the notable exits of high-level execs like co-founder and longtime chief scientist Ilya Sutskever and CTO Mira Murati. OpenAI has also been hit with lawsuits from Alden Global Capital-owned newspapers alleging copyright infringement, as well as an injunction sought by Elon Musk to halt OpenAI’s transition to a for-profit.
    In 2025, OpenAI is battling the perception that it’s ceding ground in the AI race to Chinese rivals like DeepSeek. The company has been trying to shore up its relationship with Washington as it simultaneously pursues an ambitious data center project, and as it reportedly lays the groundwork for one of the largest funding rounds in history.
    Below, you’ll find a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. If you have any other questions, check out our ChatGPT FAQ here.
    To see a list of 2024 updates, go here.
    Timeline of the most recent ChatGPT updates


    May 2025
    OpenAI CFO says hardware will drive ChatGPT’s growth
    OpenAI plans to purchase Jony Ive’s devices startup io for a reported $6.5 billion. Sarah Friar, CFO of OpenAI, thinks the hardware will significantly enhance ChatGPT and broaden OpenAI’s reach to a larger audience.
    OpenAI’s ChatGPT unveils its AI coding agent, Codex
    OpenAI has introduced its AI coding agent, Codex, powered by codex-1, a version of its o3 AI reasoning model designed for software engineering tasks. OpenAI says codex-1 generates more precise and “cleaner” code than o3. The coding agent may take anywhere from one to 30 minutes to complete tasks such as writing simple features, fixing bugs, answering questions about your codebase, and running tests.
    Sam Altman aims to make ChatGPT more personalized by tracking every aspect of a person’s life
    When asked at a recent AI event hosted by VC firm Sequoia how ChatGPT can become more personalized, OpenAI CEO Sam Altman said he wants ChatGPT to record and remember every detail of a person’s life.
    OpenAI releases its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT
    OpenAI said in a post on X that it has launched its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT.
    OpenAI’s deep research feature can now analyze GitHub code repositories
    OpenAI has launched a new feature for ChatGPT deep research to analyze code repositories on GitHub. The ChatGPT deep research feature is in beta and lets developers connect with GitHub to ask questions about codebases and engineering documents. The connector will soon be available for ChatGPT Plus, Pro, and Team users, with support for Enterprise and Education coming shortly, per an OpenAI spokesperson.
    OpenAI launches a new data residency program in Asia
    After introducing a data residency program in Europe in February, OpenAI has now launched a similar program in Asian countries including India, Japan, Singapore, and South Korea. The new program will be accessible to users of ChatGPT Enterprise, ChatGPT Edu, and API. It will help organizations in Asia meet their local data sovereignty requirements when using OpenAI’s products.
    OpenAI to introduce a program to grow AI infrastructure
    OpenAI is unveiling a program called OpenAI for Countries, which aims to develop the necessary local infrastructure to serve international AI clients better. The AI startup will work with governments to assist with increasing data center capacity and customizing OpenAI’s products to meet specific language and local needs. OpenAI for Countries is part of efforts to support the company’s expansion of its AI data center Project Stargate to new locations outside the U.S., per Bloomberg.
    OpenAI promises to make changes to prevent future ChatGPT sycophancy
    OpenAI has announced its plan to make changes to its procedures for updating the AI models that power ChatGPT, following an update that caused the platform to become overly sycophantic for many users.
    April 2025
    OpenAI clarifies the reason ChatGPT became overly flattering and agreeable
    OpenAI has released a post on the recent sycophancy issues with the default AI model powering ChatGPT, GPT-4o, leading the company to revert an update to the model released last week. CEO Sam Altman acknowledged the issue on Sunday and confirmed two days later that the GPT-4o update was being rolled back. OpenAI is working on “additional fixes” to the model’s personality. Over the weekend, users on social media criticized the new model for making ChatGPT too validating and agreeable. It became a popular meme fast.
    OpenAI is working to fix a “bug” that let minors engage in inappropriate conversations
    An issue within OpenAI’s ChatGPT enabled the chatbot to create graphic erotic content for accounts registered by users under the age of 18, as demonstrated by TechCrunch’s testing, a fact later confirmed by OpenAI. “Protecting younger users is a top priority, and our Model Spec, which guides model behavior, clearly restricts sensitive content like erotica to narrow contexts such as scientific, historical, or news reporting,” a spokesperson told TechCrunch via email. “In this case, a bug allowed responses outside those guidelines, and we are actively deploying a fix to limit these generations.”
    ChatGPT search gets shopping features
    OpenAI has added a few features to ChatGPT search, its web search tool in ChatGPT, to give users an improved online shopping experience. The company says people can ask super-specific questions using natural language and receive customized results. The chatbot provides recommendations, images, and reviews of products in various categories such as fashion, beauty, home goods, and electronics.
    OpenAI wants its AI model to access cloud models for assistance
    OpenAI leaders have been talking about allowing the open model to link up with OpenAI’s cloud-hosted models to improve its ability to respond to intricate questions, two sources familiar with the situation told TechCrunch.
    OpenAI aims to make its new “open” AI model the best on the market
    OpenAI is preparing to launch an AI system that will be openly accessible, allowing users to download it for free without any API restrictions. Aidan Clark, OpenAI’s VP of research, is spearheading the development of the open model, which is in the very early stages, sources familiar with the situation told TechCrunch.
    OpenAI’s GPT-4.1 may be less aligned than earlier models
    OpenAI released a new AI model called GPT-4.1 in mid-April. However, multiple independent tests indicate that the model is less aligned, and thus less reliable, than previous OpenAI releases. The company skipped its customary step of publishing a safety report (system card) for GPT-4.1, claiming in a statement to TechCrunch that “GPT-4.1 is not a frontier model, so there won’t be a separate system card released for it.”
    OpenAI’s o3 AI model scored lower than expected on a benchmark
    Questions have been raised regarding OpenAI’s transparency and model-testing procedures after a discrepancy was found between first- and third-party benchmark results for the o3 AI model. OpenAI introduced o3 in December, stating that the model could solve approximately 25% of questions on FrontierMath, a difficult math problem set. Epoch AI, the research institute behind FrontierMath, found that o3 scored approximately 10%, significantly lower than OpenAI’s top-reported score.
    OpenAI unveils Flex processing for cheaper, slower AI tasks
    OpenAI has launched a new API feature called Flex processing that allows users to use AI models at a lower cost but with slower response times and occasional resource unavailability. Flex processing is available in beta on the o3 and o4-mini reasoning models for non-production tasks like model evaluations, data enrichment, and asynchronous workloads.
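    For developers who want to try it, opting in is a single request parameter. A minimal sketch, assuming the official openai Python SDK and an OPENAI_API_KEY in the environment (the prompt here is purely illustrative):
    ```python
    # Hedged sketch: a Flex-processing request via the OpenAI Python SDK.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="o3",
        service_tier="flex",  # opts into cheaper, slower Flex processing
        messages=[{"role": "user", "content": "Classify this support ticket: ..."}],
    )
    print(response.choices[0].message.content)
    ```
    Because Flex requests can hit occasional resource unavailability, production callers would typically wrap a call like this in retry logic.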
    OpenAI’s latest AI models now have a safeguard against biorisks
    OpenAI has rolled out a new system to monitor its AI reasoning models, o3 and o4-mini, for biological and chemical threats. The system is designed to prevent the models from giving advice that could potentially lead to harmful attacks, as stated in OpenAI’s safety report.
    OpenAI launches its latest reasoning models, o3 and o4-mini
    OpenAI has released two new reasoning models, o3 and o4-mini, just two days after launching GPT-4.1. The company claims o3 is the most advanced reasoning model it has developed, while o4-mini is said to provide a balance of price, speed, and performance. The new models stand out from previous reasoning models because they can use ChatGPT features like web browsing, coding, and image processing and generation. But they hallucinate more than several of OpenAI’s previous models.
    OpenAI has added a new section to ChatGPT to offer easier access to AI-generated images for all user tiers
    OpenAI introduced a new section called “library” to make it easier for users to access the images they’ve created on mobile and web platforms, per the company’s X post.
    OpenAI could “adjust” its safeguards if rivals release “high-risk” AI
    OpenAI said on Tuesday that it might revise its safety standards if “another frontier AI developer releases a high-risk system without comparable safeguards.” The move shows how commercial AI developers face more pressure to rapidly implement models due to the increased competition.
    OpenAI is reportedly developing its own social media network
    OpenAI is currently in the early stages of developing its own social media platform to compete with Elon Musk’s X and Mark Zuckerberg’s Instagram and Threads, according to The Verge. It is unclear whether OpenAI intends to launch the social network as a standalone application or incorporate it into ChatGPT.
    OpenAI will remove its largest AI model, GPT-4.5, from the API in July
    OpenAI will discontinue its largest AI model, GPT-4.5, in its API even though it launched only in late February. GPT-4.5 will remain available in a research preview for paying ChatGPT customers. Developers can use GPT-4.5 through OpenAI’s API until July 14; then, they will need to switch to GPT-4.1, which was released on April 14.
    OpenAI unveils GPT-4.1 AI models that focus on coding capabilities
    OpenAI has launched three members of the GPT-4.1 model family — GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano — with a specific focus on coding capabilities. The models are accessible via the OpenAI API but not ChatGPT. In the competition to develop advanced programming models, GPT-4.1 will rival AI models such as Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and DeepSeek’s upgraded V3.
    OpenAI will discontinue ChatGPT’s GPT-4 at the end of April
    OpenAI plans to sunset GPT-4, an AI model introduced more than two years ago, and replace it with GPT-4o, the current default model, per its changelog. The change takes effect on April 30. GPT-4 will remain available via OpenAI’s API.
    OpenAI could release GPT-4.1 soon
    OpenAI may launch several new AI models, including GPT-4.1, soon, The Verge reported, citing anonymous sources. GPT-4.1 would be an update of OpenAI’s GPT-4o, which was released last year. On the list of upcoming models are GPT-4.1 and smaller versions like GPT-4.1 mini and nano, per the report.
    OpenAI has updated ChatGPT to use information from your previous conversations
    OpenAI started updating ChatGPT to enable the chatbot to remember previous conversations with a user and customize its responses based on that context. This feature is rolling out to ChatGPT Pro and Plus users first, excluding those in the U.K., EU, Iceland, Liechtenstein, Norway, and Switzerland.
    OpenAI is working on watermarks for images made with ChatGPT
    It looks like OpenAI is working on a watermarking feature for images generated using GPT-4o. AI researcher Tibor Blaho spotted a new “ImageGen” watermark feature in the new beta of ChatGPT’s Android app. Blaho also found mentions of other tools: “Structured Thoughts,” “Reasoning Recap,” “CoT Search Tool,” and “l1239dk1.”
    OpenAI offers ChatGPT Plus for free to U.S., Canadian college students
    OpenAI is offering its $20-per-month ChatGPT Plus subscription tier for free to all college students in the U.S. and Canada through the end of May. The offer will let millions of students use OpenAI’s premium service, which offers access to the company’s GPT-4o model, image generation, voice interaction, and research tools that are not available in the free version.
    ChatGPT users have generated over 700M images so far
    More than 130 million users have created over 700 million images since ChatGPT got the upgraded image generator on March 25, according to COO of OpenAI Brad Lightcap. The image generator was made available to all ChatGPT users on March 31, and went viral for being able to create Ghibli-style photos.
    OpenAI’s o3 model could cost more to run than initial estimate
    The Arc Prize Foundation, which develops the AI benchmark tool ARC-AGI, has updated the estimated computing costs for OpenAI’s o3 “reasoning” model managed by ARC-AGI. The organization now believes the best-performing configuration it tested, o3 high, could cost far more per task than it originally estimated.
    OpenAI CEO says capacity issues will cause product delays
    In a series of posts on X, OpenAI CEO Sam Altman said the company’s new image-generation tool’s popularity may cause product releases to be delayed. “We are getting things under control, but you should expect new releases from OpenAI to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges,” he wrote.
    March 2025
    OpenAI plans to release a new ‘open’ AI language model
    OpenAI intends to release its “first” open language model since GPT-2 “in the coming months.” The company plans to host developer events to gather feedback and eventually showcase prototypes of the model. The first developer event is to be held in San Francisco, with sessions to follow in Europe and Asia.
    OpenAI removes ChatGPT’s restrictions on image generation
    OpenAI made a notable change to its content moderation policies after the success of its new image generator in ChatGPT, which went viral for being able to create Studio Ghibli-style images. The company has updated its policies to allow ChatGPT to generate images of public figures, hateful symbols, and racial features when requested. OpenAI had previously declined such prompts due to the potential controversy or harm they may cause. However, the company has now “evolved” its approach, as stated in a blog post published by Joanne Jang, the lead for OpenAI’s model behavior.
    OpenAI adopts Anthropic’s standard for linking AI models with data
    OpenAI wants to incorporate Anthropic’s Model Context Protocol (MCP) into all of its products, including the ChatGPT desktop app. MCP, an open-source standard, helps AI models generate more accurate and suitable responses to specific queries, and lets developers create bidirectional links between data sources and AI applications like chatbots. The protocol is currently available in the Agents SDK, and support for the ChatGPT desktop app and Responses API will be coming soon, OpenAI CEO Sam Altman said.
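    To make the idea concrete, here is a minimal sketch of an MCP server using the official Python SDK (the mcp package); the server name, tool, and data are hypothetical, not anything from OpenAI’s announcement:
    ```python
    # Minimal illustrative MCP server; an MCP-capable client (e.g. a chatbot)
    # can discover and call the tool below without custom glue code.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("order-lookup")  # hypothetical server name

    @mcp.tool()
    def lookup_order(order_id: str) -> str:
        """Return the status of an order (stubbed for illustration)."""
        return f"Order {order_id}: shipped"

    if __name__ == "__main__":
        mcp.run()  # serves the tool over stdio by default
    ```
    The appeal of the standard is that any MCP-aware model or app can call lookup_order the same way, regardless of which vendor built the client.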
    OpenAI’s viral Studio Ghibli-style images could raise AI copyright concerns
    The latest update of the image generator on OpenAI’s ChatGPT has triggered a flood of AI-generated memes in the style of Studio Ghibli, the Japanese animation studio behind blockbuster films like “My Neighbor Totoro” and “Spirited Away.” The burgeoning mass of Ghibli-esque images has sparked concerns about whether OpenAI has violated copyright laws, especially since the company is already facing legal action for using source material without authorization.
    OpenAI expects revenue to triple this year
    OpenAI expects its revenue to triple in 2025, fueled by the performance of its paid AI software, Bloomberg reported, citing an anonymous source. While the startup doesn’t expect to reach positive cash flow until 2029, it expects revenue to climb significantly again in 2026, the report said.
    ChatGPT has upgraded its image-generation feature
    OpenAI on Tuesday rolled out a major upgrade to ChatGPT’s image-generation capabilities: ChatGPT can now use the GPT-4o model to generate and edit images and photos directly. The feature went live earlier this week in ChatGPT and Sora, OpenAI’s AI video-generation tool, for subscribers of the company’s Pro plan, priced at $200 a month, and will be available soon to ChatGPT Plus subscribers and developers using the company’s API service. The company’s CEO Sam Altman said on Wednesday, however, that the release of the image generation feature to free users would be delayed due to higher demand than the company expected.
    OpenAI announces leadership updates
    Brad Lightcap, OpenAI’s chief operating officer, will lead the company’s global expansion and manage corporate partnerships as CEO Sam Altman shifts his focus to research and products, according to a blog post from OpenAI. Lightcap, who previously worked with Altman at Y Combinator, joined the Microsoft-backed startup in 2018. OpenAI also said Mark Chen would step into the expanded role of chief research officer, and Julia Villagra will take on the role of chief people officer.
    OpenAI’s AI voice assistant gets an upgrade
    OpenAI has updated its AI voice assistant with improved chatting capabilities, according to a video posted on Monday to the company’s official media channels. The update enables real-time conversations, and the AI assistant is said to be more personable and to interrupt users less often. Users on ChatGPT’s free tier can now access the new version of Advanced Voice Mode, while paying users will receive answers that are “more direct, engaging, concise, specific, and creative,” a spokesperson from OpenAI told TechCrunch.
    OpenAI and Meta reportedly in talks with Reliance over AI partnerships in India
    OpenAI and Meta have separately engaged in discussions with Indian conglomerate Reliance Industries regarding potential collaborations to enhance their AI services in the country, per a report by The Information. One key topic being discussed is Reliance Jio distributing OpenAI’s ChatGPT. Reliance has proposed selling OpenAI’s models to businesses in India through an application programming interface (API) so they can incorporate AI into their operations. Meta also plans to bolster its presence in India by constructing a large 3GW data center in Jamnagar, Gujarat. OpenAI, Meta, and Reliance have not yet officially announced these plans.
    OpenAI faces privacy complaint in Europe for chatbot’s defamatory hallucinations
    Noyb, a privacy rights advocacy group, is supporting an individual in Norway who was shocked to discover that ChatGPT was providing false information about him, stating that he had been found guilty of killing two of his children and trying to harm the third. “The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
    OpenAI upgrades its transcription and voice-generating AI models
    OpenAI has added new transcription and voice-generating AI models to its APIs: a text-to-speech model, “gpt-4o-mini-tts,” that delivers more nuanced and realistic sounding speech, as well as two speech-to-text models called “gpt-4o-transcribe” and “gpt-4o-mini-transcribe”. The company claims they are improved versions of what was already there and that they hallucinate less.
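    As a rough illustration of how the new models are called (assuming the openai Python SDK; the file names and voice choice are placeholders):
    ```python
    # Hedged sketch: speech-to-text and text-to-speech with the new audio models.
    from openai import OpenAI

    client = OpenAI()

    # Transcribe an audio file with gpt-4o-transcribe.
    with open("meeting.wav", "rb") as f:
        transcript = client.audio.transcriptions.create(
            model="gpt-4o-transcribe",
            file=f,
        )
    print(transcript.text)

    # Synthesize speech with gpt-4o-mini-tts.
    speech = client.audio.speech.create(
        model="gpt-4o-mini-tts",
        voice="alloy",
        input="Your meeting summary is ready.",
    )
    speech.write_to_file("summary.mp3")  # SDK helper on the binary response
    ```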
    OpenAI has launched o1-pro, a more powerful version of its o1
    OpenAI has introduced o1-pro in its developer API. OpenAI says o1-pro uses more computing than its o1 “reasoning” AI model to deliver “consistently better responses.” It’s only accessible to select developers who have spent at least $5 on OpenAI API services. OpenAI charges $150 for every million tokens fed into the model and $600 for every million tokens the model produces. That makes it twice as expensive as OpenAI’s GPT-4.5 for input and 10 times the price of regular o1.
    Noam Brown thinks certain AI ‘reasoning’ models could have arrived 20 years ago
    Noam Brown, who heads AI reasoning research at OpenAI, thinks that certain types of AI models for “reasoning” could have been developed 20 years ago if researchers had understood the correct approach and algorithms.
    OpenAI says it has trained an AI that’s “really good” at creative writing
    OpenAI CEO Sam Altman said, in a post on X, that the company has trained a “new model” that’s “really good” at creative writing. He posted a lengthy sample from the model given the prompt “Please write a metafictional literary short story about AI and grief.” OpenAI has not extensively explored the use of AI for writing fiction; the company has mostly concentrated on challenges in rigid, predictable areas such as math and programming, and even its best models might not be that great at creative writing at all.
    OpenAI launches new tools to help businesses build AI agents
    OpenAI rolled out new tools designed to help developers and businesses build AI agents — automated systems that can independently accomplish tasks — using the company’s own AI models and frameworks. The tools are part of OpenAI’s new Responses API, which enables enterprises to develop customized AI agents that can perform web searches, scan through company files, and navigate websites, similar to OpenAI’s Operator product. The Responses API effectively replaces OpenAI’s Assistants API, which the company plans to discontinue in the first half of 2026.
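    A sketch of what the simplest agent-style call through the Responses API might look like, assuming the openai Python SDK (the query is illustrative; web_search_preview is the built-in tool type OpenAI documents for web search):
    ```python
    # Hedged sketch: a single Responses API call with the built-in web search tool.
    from openai import OpenAI

    client = OpenAI()

    response = client.responses.create(
        model="gpt-4o",
        tools=[{"type": "web_search_preview"}],  # built-in web search
        input="Summarize this week's changes to the Responses API.",
    )
    print(response.output_text)  # convenience accessor for the text output
    ```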
    OpenAI reportedly plans to charge up to $20,000 a month for specialized AI ‘agents’
    OpenAI intends to release several “agent” products tailored for different applications, including sorting and ranking sales leads and software engineering, according to a report from The Information. One, a “high-income knowledge worker” agent, will reportedly be priced at $2,000 a month. Another, a software developer agent, is said to cost $10,000 a month. The most expensive rumored agents, which are said to be aimed at supporting “PhD-level research,” are expected to cost $20,000 per month. The jaw-dropping figures are indicative of how much cash OpenAI needs right now: the company lost roughly $5 billion last year after paying for costs related to running its services and other expenses. It’s unclear when these agentic tools might launch or which customers will be eligible to buy them.
    ChatGPT can directly edit your code
    The latest version of the macOS ChatGPT app allows users to edit code directly in supported developer tools, including Xcode, VS Code, and JetBrains. ChatGPT Plus, Pro, and Team subscribers can use the feature now, and the company plans to roll it out to more users like Enterprise, Edu, and free users.
    ChatGPT’s weekly active users doubled in less than 6 months, thanks to new releases
    According to a new report from VC firm Andreessen Horowitz, OpenAI’s AI chatbot, ChatGPT, experienced solid growth in the second half of 2024. It took ChatGPT nine months to increase its weekly active users from 100 million in November 2023 to 200 million in August 2024, but it only took less than six months to double that number once more, according to the report. ChatGPT’s weekly active users increased to 300 million by December 2024 and 400 million by February 2025. ChatGPT has experienced significant growth recently due to the launch of new models and features, such as GPT-4o, with multimodal capabilities. ChatGPT usage spiked from April to May 2024, shortly after that model’s launch.
    February 2025
    OpenAI cancels its o3 AI model in favor of a ‘unified’ next-gen release
    OpenAI has effectively canceled the release of o3 in favor of what CEO Sam Altman is calling a “simplified” product offering. In a post on X, Altman said that, in the coming months, OpenAI will release a model called GPT-5 that “integrates a lot of our technology,” including o3, in ChatGPT and its API. As a result of that roadmap decision, OpenAI no longer plans to release o3 as a standalone model.
    ChatGPT may not be as power-hungry as once assumed
    A commonly cited stat is that ChatGPT requires around 3 watt-hours of power to answer a single question. Using OpenAI’s latest default model for ChatGPT, GPT-4o, as a reference, nonprofit AI research institute Epoch AI found the average ChatGPT query consumes around 0.3 watt-hours. However, the analysis doesn’t consider the additional energy costs incurred by ChatGPT with features like image generation or input processing.
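    A quick back-of-envelope calculation shows why the revised figure matters at scale (the daily query volume below is an illustrative assumption, not a reported number):
    ```python
    # Comparing the commonly cited figure with Epoch AI's estimate.
    WH_PER_QUERY_OLD = 3.0   # commonly cited older figure, in watt-hours
    WH_PER_QUERY_NEW = 0.3   # Epoch AI's estimate for a GPT-4o query
    QUERIES_PER_DAY = 1e9    # hypothetical volume for illustration

    for label, wh in (("old", WH_PER_QUERY_OLD), ("new", WH_PER_QUERY_NEW)):
        mwh_per_day = wh * QUERIES_PER_DAY / 1e6  # watt-hours -> megawatt-hours
        print(f"{label} estimate: {mwh_per_day:,.0f} MWh/day")
    # old estimate: 3,000 MWh/day; new estimate: 300 MWh/day
    ```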
    OpenAI now reveals more of its o3-mini model’s thought process
    In response to pressure from rivals like DeepSeek, OpenAI is changing the way its o3-mini model communicates its step-by-step “thought” process. ChatGPT users will see an updated “chain of thought” that shows more of the model’s “reasoning” steps and how it arrived at answers to questions.
    You can now use ChatGPT web search without logging in
    OpenAI is now allowing anyone to use ChatGPT web search without having to log in. While OpenAI had previously allowed users to ask ChatGPT questions without signing in, responses were restricted to the chatbot’s last training update. This only applies through ChatGPT.com, however. To use ChatGPT in any form through the native mobile app, you will still need to be logged in.
    OpenAI unveils a new ChatGPT agent for ‘deep research’
    OpenAI announced a new AI “agent” called deep research that’s designed to help people conduct in-depth, complex research using ChatGPT. OpenAI says the “agent” is intended for instances where you don’t just want a quick answer or summary, but instead need to assiduously consider information from multiple websites and other sources.
    January 2025
    OpenAI used a subreddit to test AI persuasion
    OpenAI used the subreddit r/ChangeMyView to measure the persuasive abilities of its AI reasoning models. OpenAI says it collects user posts from the subreddit and asks its AI models to write replies, in a closed environment, that would change the Reddit user’s mind on a subject. The company then shows the responses to testers, who assess how persuasive the argument is, and finally OpenAI compares the AI models’ responses to human replies for that same post. 
    OpenAI launches o3-mini, its latest ‘reasoning’ model
    OpenAI launched a new AI “reasoning” model, o3-mini, the newest in the company’s o family of models. OpenAI first previewed the model in December alongside a more capable system called o3. OpenAI is pitching its new model as both “powerful” and “affordable.”
    ChatGPT’s mobile users are 85% male, report says
    A new report from app analytics firm Appfigures found that over half of ChatGPT’s mobile users are under age 25, with users between ages 50 and 64 making up the second largest age demographic. The gender gap among ChatGPT users is even more significant. Appfigures estimates that across age groups, men make up 84.5% of all users.
    OpenAI launches ChatGPT plan for US government agencies
    OpenAI launched ChatGPT Gov, designed to provide U.S. government agencies an additional way to access the tech. ChatGPT Gov includes many of the capabilities found in OpenAI’s corporate-focused tier, ChatGPT Enterprise. OpenAI says that ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance, and could expedite internal authorization of OpenAI’s tools for the handling of non-public sensitive data.
    More teens report using ChatGPT for schoolwork, despite the tech’s faults
    Younger Gen Zers are embracing ChatGPT for schoolwork, according to a new survey by the Pew Research Center. In a follow-up to its 2023 poll on ChatGPT usage among young people, Pew asked ~1,400 U.S.-based teens ages 13 to 17 whether they’ve used ChatGPT for homework or other school-related assignments. Twenty-six percent said they had, double the share from two years ago. Just over half of teens responding to the poll said they think it’s acceptable to use ChatGPT for researching new subjects. But considering the ways ChatGPT can fall short, the results are possibly cause for alarm.
    OpenAI says it may store deleted Operator data for up to 90 days
    OpenAI says that it might store chats and associated screenshots from customers who use Operator, the company’s AI “agent” tool, for up to 90 days — even after a user manually deletes them. While OpenAI has a similar deleted data retention policy for ChatGPT, the retention period for ChatGPT is only 30 days, which is 60 days shorter than Operator’s.
    OpenAI launches Operator, an AI agent that performs tasks autonomously
    OpenAI is launching a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform certain actions. Operator promises to automate tasks such as booking travel accommodations, making restaurant reservations, and shopping online.
    Operator, OpenAI’s agent tool, could be released sooner rather than later
    Changes to ChatGPT’s code base suggest that Operator will be available as an early research preview to users on the Pro subscription plan. The changes aren’t yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT’s client-side code. TechCrunch separately identified the same references to Operator on OpenAI’s website.
    OpenAI tests phone number-only ChatGPT signups
    OpenAI has begun testing a feature that lets new ChatGPT users sign up with only a phone number — no email required. The feature is currently in beta in the U.S. and India. However, users who create an account using their number can’t upgrade to one of OpenAI’s paid plans without verifying their account via an email. Multi-factor authentication also isn’t supported without a valid email.
    ChatGPT now lets you schedule reminders and recurring tasks
    ChatGPT’s new beta feature, called tasks, allows users to set simple reminders. For example, you can ask ChatGPT to remind you when your passport expires in six months, and the AI assistant will follow up with a push notification on whatever platform you have tasks enabled. The feature will start rolling out to ChatGPT Plus, Team, and Pro users around the globe this week.
    New ChatGPT feature lets users assign it traits like ‘chatty’ and ‘Gen Z’
    OpenAI is introducing a new way for users to customize their interactions with ChatGPT. Some users found they can specify a preferred name or nickname and “traits” they’d like the chatbot to have. OpenAI suggests traits like “Chatty,” “Encouraging,” and “Gen Z.” However, some users reported that the new options have disappeared, so it’s possible they went live prematurely.
    FAQs:
    What is ChatGPT? How does it work?
    ChatGPT is a general-purpose chatbot, developed by tech startup OpenAI, that uses artificial intelligence to generate text after a user enters a prompt. It is powered by OpenAI’s GPT series of large language models, which use deep learning to produce human-like text; GPT-4o is the current default.
    When did ChatGPT get released?
    ChatGPT was released for public use on November 30, 2022.
    What is the latest version of ChatGPT?
    Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o.
    Can I use ChatGPT for free?
    Yes. In addition to the paid ChatGPT Plus tier, there is a free version of ChatGPT that only requires a sign-in.
    Who uses ChatGPT?
    Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions/concerns.
    What companies use ChatGPT?
    Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool.
    Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. Brooklyn-based 3D display startup Looking Glass uses ChatGPT to produce holograms you can communicate with, and nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users to help them onboard into the web3 space.
    What does GPT mean in ChatGPT?
    GPT stands for Generative Pre-Trained Transformer.
    What is the difference between ChatGPT and a chatbot?
    A chatbot can be any software or system that holds a dialogue with a person, but it doesn’t necessarily have to be AI-powered. For example, some chatbots are rules-based, giving canned responses to questions.
    ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt.
    Can ChatGPT write essays?
    Yes.
    Can ChatGPT commit libel?
    Due to the nature of how these models work, they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using it to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well at this point be libel.
    We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry.
    Does ChatGPT have an app?
    Yes, there is a free ChatGPT mobile app for iOS and Android users.
    What is the ChatGPT character limit?
    It’s not documented anywhere that ChatGPT has a character limit. However, users have noted that there are some character limitations after around 500 words.
    Does ChatGPT have an API?
    Yes, it was released March 1, 2023.
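    A minimal call looks like this (assuming the current openai Python SDK and an API key in the environment; the model and prompt are illustrative):
    ```python
    # Hedged sketch: the simplest possible Chat Completions request.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    chat = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Write a haiku about deadlines."}],
    )
    print(chat.choices[0].message.content)
    ```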
    What are some sample everyday uses for ChatGPT?
    Everyday examples include programming, scripts, email replies, listicles, blog ideas, summarization, etc.
    What are some advanced uses for ChatGPT?
    Advanced use examples include debugging code, explaining programming languages and scientific concepts, complex problem solving, etc.
    How good is ChatGPT at writing code?
    It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used.
    Can you save a ChatGPT chat?
    Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet.
    Are there alternatives to ChatGPT?
    Yes. There are multiple AI-powered chatbot competitors such as Together, Google’s Gemini and Anthropic’s Claude, and developers are creating open source alternatives.
    How does ChatGPT handle data privacy?
    OpenAI has said that individuals in “certain jurisdictions” can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to make requests for deletion of AI-generated references about you. OpenAI notes, though, that it may not grant every request, since it must balance privacy requests against freedom of expression “in accordance with applicable laws”.
    The web form for requesting deletion of data about you is titled “OpenAI Personal Data Removal Request”.
    In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest”, pointing users towards more information about requesting an opt-out when it writes: “See here for instructions on how you can opt out of our use of your information to train our models.”
    What controversies have surrounded ChatGPT?
    Recently, Discord announced that it had integrated OpenAI’s technology into its bot named Clyde, and two users subsequently tricked Clyde into providing them with instructions for making the illegal drug methamphetamine and the incendiary mixture napalm.
    An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.
    CNET found itself in the midst of controversy after Futurism reported the publication was publishing articles under a mysterious byline completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even when the information was incorrect.
    Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with.
    There have also been cases of ChatGPT accusing individuals of false crimes.
    Where can I find examples of ChatGPT prompts?
    Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day.
    Can ChatGPT be detected?
    Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best.
    Are ChatGPT chats public?
    No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service.
    What lawsuits are there surrounding ChatGPT?
    None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.
    Are there issues regarding plagiarism with ChatGPT?
    Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.
The company’s CEO Sam Altman said on Wednesday, however, that the release of the image generation feature to free users would be delayed due to higher demand than the company expected. OpenAI announces leadership updates Brad Lightcap, OpenAI’s chief operating officer, will lead the company’s global expansion and manage corporate partnerships as CEO Sam Altman shifts his focus to research and products, according to a blog post from OpenAI. Lightcap, who previously worked with Altman at Y Combinator, joined the Microsoft-backed startup in 2018. OpenAI also said Mark Chen would step into the expanded role of chief research officer, and Julia Villagra will take on the role of chief people officer. OpenAI’s AI voice assistant now has advanced feature OpenAI has updated its AI voice assistant with improved chatting capabilities, according to a video posted on Mondayto the company’s official media channels. The update enables real-time conversations, and the AI assistant is said to be more personable and interrupts users less often. Users on ChatGPT’s free tier can now access the new version of Advanced Voice Mode, while paying users will receive answers that are “more direct, engaging, concise, specific, and creative,” a spokesperson from OpenAI told TechCrunch. OpenAI and Meta have separately engaged in discussions with Indian conglomerate Reliance Industries regarding potential collaborations to enhance their AI services in the country, per a report by The Information. One key topic being discussed is Reliance Jio distributing OpenAI’s ChatGPT. Reliance has proposed selling OpenAI’s models to businesses in India through an application programming interfaceso they can incorporate AI into their operations. Meta also plans to bolster its presence in India by constructing a large 3GW data center in Jamnagar, Gujarat. OpenAI, Meta, and Reliance have not yet officially announced these plans. OpenAI faces privacy complaint in Europe for chatbot’s defamatory hallucinations Noyb, a privacy rights advocacy group, is supporting an individual in Norway who was shocked to discover that ChatGPT was providing false information about him, stating that he had been found guilty of killing two of his children and trying to harm the third. “The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.” OpenAI upgrades its transcription and voice-generating AI models OpenAI has added new transcription and voice-generating AI models to its APIs: a text-to-speech model, “gpt-4o-mini-tts,” that delivers more nuanced and realistic sounding speech, as well as two speech-to-text models called “gpt-4o-transcribe” and “gpt-4o-mini-transcribe”. The company claims they are improved versions of what was already there and that they hallucinate less. OpenAI has launched o1-pro, a more powerful version of its o1 OpenAI has introduced o1-pro in its developer API. OpenAI says its o1-pro uses more computing than its o1 “reasoning” AI model to deliver “consistently better responses.” It’s only accessible to select developers who have spent at least on OpenAI API services. 
OpenAI charges for every million tokensinput into the model and for every million tokens the model produces. It costs twice as much as OpenAI’s GPT-4.5 for input and 10 times the price of regular o1. Noam Brown, who heads AI reasoning research at OpenAI, thinks that certain types of AI models for “reasoning” could have been developed 20 years ago if researchers had understood the correct approach and algorithms. OpenAI says it has trained an AI that’s “really good” at creative writing OpenAI CEO Sam Altman said, in a post on X, that the company has trained a “new model” that’s “really good” at creative writing. He posted a lengthy sample from the model given the prompt “Please write a metafictional literary short story about AI and grief.” OpenAI has not extensively explored the use of AI for writing fiction. The company has mostly concentrated on challenges in rigid, predictable areas such as math and programming.might not be that great at creative writing at all. OpenAI rolled out new tools designed to help developers and businesses build AI agents — automated systems that can independently accomplish tasks — using the company’s own AI models and frameworks. The tools are part of OpenAI’s new Responses API, which enables enterprises to develop customized AI agents that can perform web searches, scan through company files, and navigate websites, similar to OpenAI’s Operator product. The Responses API effectively replaces OpenAI’s Assistants API, which the company plans to discontinue in the first half of 2026. OpenAI reportedly plans to charge up to a month for specialized AI ‘agents’ OpenAI intends to release several “agent” products tailored for different applications, including sorting and ranking sales leads and software engineering, according to a report from The Information. One, a “high-income knowledge worker” agent, will reportedly be priced at a month. Another, a software developer agent, is said to cost a month. The most expensive rumored agents, which are said to be aimed at supporting “PhD-level research,” are expected to cost per month. The jaw-dropping figure is indicative of how much cash OpenAI needs right now: The company lost roughly billion last year after paying for costs related to running its services and other expenses. It’s unclear when these agentic tools might launch or which customers will be eligible to buy them. ChatGPT can directly edit your code The latest version of the macOS ChatGPT app allows users to edit code directly in supported developer tools, including Xcode, VS Code, and JetBrains. ChatGPT Plus, Pro, and Team subscribers can use the feature now, and the company plans to roll it out to more users like Enterprise, Edu, and free users. ChatGPT’s weekly active users doubled in less than 6 months, thanks to new releases According to a new report from VC firm Andreessen Horowitz, OpenAI’s AI chatbot, ChatGPT, experienced solid growth in the second half of 2024. It took ChatGPT nine months to increase its weekly active users from 100 million in November 2023 to 200 million in August 2024, but it only took less than six months to double that number once more, according to the report. ChatGPT’s weekly active users increased to 300 million by December 2024 and 400 million by February 2025. ChatGPT has experienced significant growth recently due to the launch of new models and features, such as GPT-4o, with multimodal capabilities. ChatGPT usage spiked from April to May 2024, shortly after that model’s launch. 
February 2025 OpenAI cancels its o3 AI model in favor of a ‘unified’ next-gen release OpenAI has effectively canceled the release of o3 in favor of what CEO Sam Altman is calling a “simplified” product offering. In a post on X, Altman said that, in the coming months, OpenAI will release a model called GPT-5 that “integrates a lot oftechnology,” including o3, in ChatGPT and its API. As a result of that roadmap decision, OpenAI no longer plans to release o3 as a standalone model.  ChatGPT may not be as power-hungry as once assumed A commonly cited stat is that ChatGPT requires around 3 watt-hours of power to answer a single question. Using OpenAI’s latest default model for ChatGPT, GPT-4o, as a reference, nonprofit AI research institute Epoch AI found the average ChatGPT query consumes around 0.3 watt-hours. However, the analysis doesn’t consider the additional energy costs incurred by ChatGPT with features like image generation or input processing. OpenAI now reveals more of its o3-mini model’s thought process In response to pressure from rivals like DeepSeek, OpenAI is changing the way its o3-mini model communicates its step-by-step “thought” process. ChatGPT users will see an updated “chain of thought” that shows more of the model’s “reasoning” steps and how it arrived at answers to questions. You can now use ChatGPT web search without logging in OpenAI is now allowing anyone to use ChatGPT web search without having to log in. While OpenAI had previously allowed users to ask ChatGPT questions without signing in, responses were restricted to the chatbot’s last training update. This only applies through ChatGPT.com, however. To use ChatGPT in any form through the native mobile app, you will still need to be logged in. OpenAI unveils a new ChatGPT agent for ‘deep research’ OpenAI announced a new AI “agent” called deep research that’s designed to help people conduct in-depth, complex research using ChatGPT. OpenAI says the “agent” is intended for instances where you don’t just want a quick answer or summary, but instead need to assiduously consider information from multiple websites and other sources. January 2025 OpenAI used a subreddit to test AI persuasion OpenAI used the subreddit r/ChangeMyView to measure the persuasive abilities of its AI reasoning models. OpenAI says it collects user posts from the subreddit and asks its AI models to write replies, in a closed environment, that would change the Reddit user’s mind on a subject. The company then shows the responses to testers, who assess how persuasive the argument is, and finally OpenAI compares the AI models’ responses to human replies for that same post.  OpenAI launches o3-mini, its latest ‘reasoning’ model OpenAI launched a new AI “reasoning” model, o3-mini, the newest in the company’s o family of models. OpenAI first previewed the model in December alongside a more capable system called o3. OpenAI is pitching its new model as both “powerful” and “affordable.” ChatGPT’s mobile users are 85% male, report says A new report from app analytics firm Appfigures found that over half of ChatGPT’s mobile users are under age 25, with users between ages 50 and 64 making up the second largest age demographic. The gender gap among ChatGPT users is even more significant. Appfigures estimates that across age groups, men make up 84.5% of all users. OpenAI launches ChatGPT plan for US government agencies OpenAI launched ChatGPT Gov designed to provide U.S. government agencies an additional way to access the tech. 
ChatGPT Gov includes many of the capabilities found in OpenAI’s corporate-focused tier, ChatGPT Enterprise. OpenAI says that ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance, and could expedite internal authorization of OpenAI’s tools for the handling of non-public sensitive data. More teens report using ChatGPT for schoolwork, despite the tech’s faults Younger Gen Zers are embracing ChatGPT, for schoolwork, according to a new survey by the Pew Research Center. In a follow-up to its 2023 poll on ChatGPT usage among young people, Pew asked ~1,400 U.S.-based teens ages 13 to 17 whether they’ve used ChatGPT for homework or other school-related assignments. Twenty-six percent said that they had, double the number two years ago. Just over half of teens responding to the poll said they think it’s acceptable to use ChatGPT for researching new subjects. But considering the ways ChatGPT can fall short, the results are possibly cause for alarm. OpenAI says it may store deleted Operator data for up to 90 days OpenAI says that it might store chats and associated screenshots from customers who use Operator, the company’s AI “agent” tool, for up to 90 days — even after a user manually deletes them. While OpenAI has a similar deleted data retention policy for ChatGPT, the retention period for ChatGPT is only 30 days, which is 60 days shorter than Operator’s. OpenAI launches Operator, an AI agent that performs tasks autonomously OpenAI is launching a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform certain actions. Operator promises to automate tasks such as booking travel accommodations, making restaurant reservations, and shopping online. Operator, OpenAI’s agent tool, could be released sooner rather than later. Changes to ChatGPT’s code base suggest that Operator will be available as an early research preview to users on the Pro subscription plan. The changes aren’t yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT’s client-side code. TechCrunch separately identified the same references to Operator on OpenAI’s website. OpenAI tests phone number-only ChatGPT signups OpenAI has begun testing a feature that lets new ChatGPT users sign up with only a phone number — no email required. The feature is currently in beta in the U.S. and India. However, users who create an account using their number can’t upgrade to one of OpenAI’s paid plans without verifying their account via an email. Multi-factor authentication also isn’t supported without a valid email. ChatGPT now lets you schedule reminders and recurring tasks ChatGPT’s new beta feature, called tasks, allows users to set simple reminders. For example, you can ask ChatGPT to remind you when your passport expires in six months, and the AI assistant will follow up with a push notification on whatever platform you have tasks enabled. The feature will start rolling out to ChatGPT Plus, Team, and Pro users around the globe this week. New ChatGPT feature lets users assign it traits like ‘chatty’ and ‘Gen Z’ OpenAI is introducing a new way for users to customize their interactions with ChatGPT. Some users found they can specify a preferred name or nickname and “traits” they’d like the chatbot to have. OpenAI suggests traits like “Chatty,” “Encouraging,” and “Gen Z.” However, some users reported that the new options have disappeared, so it’s possible they went live prematurely. FAQs: What is ChatGPT? 
How does it work? ChatGPT is a general-purpose chatbot that uses artificial intelligence to generate text after a user enters a prompt, developed by tech startup OpenAI. The chatbot uses GPT-4, a large language model that uses deep learning to produce human-like text. When did ChatGPT get released? November 30, 2022 is when ChatGPT was released for public use. What is the latest version of ChatGPT? Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o. Can I use ChatGPT for free? There is a free version of ChatGPT that only requires a sign-in in addition to the paid version, ChatGPT Plus. Who uses ChatGPT? Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions/concerns. What companies use ChatGPT? Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool. Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. A Brooklyn-based 3D display startup Looking Glass utilizes ChatGPT to produce holograms you can communicate with by using ChatGPT.  And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users to help onboard into the web3 space. What does GPT mean in ChatGPT? GPT stands for Generative Pre-Trained Transformer. What is the difference between ChatGPT and a chatbot? A chatbot can be any software/system that holds dialogue with you/a person but doesn’t necessarily have to be AI-powered. For example, there are chatbots that are rules-based in the sense that they’ll give canned responses to questions. ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt. Can ChatGPT write essays? Yes. Can ChatGPT commit libel? Due to the nature of how these models work, they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using it to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well at this point be libel. We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry. Does ChatGPT have an app? Yes, there is a free ChatGPT mobile app for iOS and Android users. What is the ChatGPT character limit? It’s not documented anywhere that ChatGPT has a character limit. However, users have noted that there are some character limitations after around 500 words. Does ChatGPT have an API? Yes, it was released March 1, 2023. What are some sample everyday uses for ChatGPT? Everyday examples include programming, scripts, email replies, listicles, blog ideas, summarization, etc. What are some advanced uses for ChatGPT? Advanced use examples include debugging code, programming languages, scientific concepts, complex problem solving, etc. How good is ChatGPT at writing code? It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used. Can you save a ChatGPT chat? Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. 
There are no built-in sharing features yet. Are there alternatives to ChatGPT? Yes. There are multiple AI-powered chatbot competitors such as Together, Google’s Gemini and Anthropic’s Claude, and developers are creating open source alternatives. How does ChatGPT handle data privacy? OpenAI has said that individuals in “certain jurisdictions”can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to make requests for deletion of AI-generated references about you. Although OpenAI notes it may not grant every request since it must balance privacy requests against freedom of expression “in accordance with applicable laws”. The web form for making a deletion of data about you request is entitled “OpenAI Personal Data Removal Request”. In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest”, pointing users towards more information about requesting an opt out — when it writes: “See here for instructions on how you can opt out of our use of your information to train our models.” What controversies have surrounded ChatGPT? Recently, Discord announced that it had integrated OpenAI’s technology into its bot named Clyde where two users tricked Clyde into providing them with instructions for making the illegal drug methamphetamineand the incendiary mixture napalm. An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service. CNET found itself in the midst of controversy after Futurism reported the publication was publishing articles under a mysterious byline completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even if the information was incorrect. Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with. There have also been cases of ChatGPT accusing individuals of false crimes. Where can I find examples of ChatGPT prompts? Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day. Can ChatGPT be detected? Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best. Are ChatGPT chats public? No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service. What lawsuits are there surrounding ChatGPT? None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT. Are there issues regarding plagiarism with ChatGPT? Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data. #chatgpt #everything #you #need #know
ChatGPT: Everything you need to know about the AI-powered chatbot
ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to supercharge productivity through writing essays and code with short text prompts has evolved into a behemoth with 300 million weekly active users.

2024 was a big year for OpenAI, from its partnership with Apple for its generative AI offering, Apple Intelligence, to the release of GPT-4o with voice capabilities, and the highly anticipated launch of its text-to-video model Sora. OpenAI also faced its share of internal drama, including the notable exits of high-level execs like co-founder and longtime chief scientist Ilya Sutskever and CTO Mira Murati. OpenAI has also been hit with lawsuits from Alden Global Capital-owned newspapers alleging copyright infringement, as well as an injunction from Elon Musk to halt OpenAI’s transition to a for-profit.

In 2025, OpenAI is battling the perception that it’s ceding ground in the AI race to Chinese rivals like DeepSeek. The company has been trying to shore up its relationship with Washington as it simultaneously pursues an ambitious data center project, and as it reportedly lays the groundwork for one of the largest funding rounds in history.

Below, you’ll find a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. If you have any other questions, check out our ChatGPT FAQ here. To see a list of 2024 updates, go here.

Timeline of the most recent ChatGPT updates

May 2025

OpenAI CFO says hardware will drive ChatGPT’s growth
OpenAI plans to purchase Jony Ive’s devices startup io for $6.4 billion. Sarah Friar, CFO of OpenAI, thinks that the hardware will significantly enhance ChatGPT and broaden OpenAI’s reach to a larger audience in the future.

OpenAI’s ChatGPT unveils its AI coding agent, Codex
OpenAI has introduced its AI coding agent, Codex, powered by codex-1, a version of its o3 AI reasoning model designed for software engineering tasks. OpenAI says codex-1 generates more precise and “cleaner” code than o3. The coding agent may take anywhere from one to 30 minutes to complete tasks such as writing simple features, fixing bugs, answering questions about your codebase, and running tests.

Sam Altman aims to make ChatGPT more personalized by tracking every aspect of a person’s life
Sam Altman, the CEO of OpenAI, said during a recent AI event hosted by VC firm Sequoia that he wants ChatGPT to record and remember every detail of a person’s life, in response to an attendee’s question about how ChatGPT could become more personalized.

OpenAI releases its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT
OpenAI said in a post on X that it has launched its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT.

OpenAI has launched a new feature for ChatGPT deep research to analyze code repositories on GitHub. The ChatGPT deep research feature is in beta and lets developers connect with GitHub to ask questions about codebases and engineering documents. The connector will soon be available for ChatGPT Plus, Pro, and Team users, with support for Enterprise and Education coming shortly, per an OpenAI spokesperson.
OpenAI launches a new data residency program in Asia
After introducing a data residency program in Europe in February, OpenAI has now launched a similar program in Asian countries including India, Japan, Singapore, and South Korea. The new program will be accessible to users of ChatGPT Enterprise, ChatGPT Edu, and the API. It will help organizations in Asia meet their local data sovereignty requirements when using OpenAI’s products.

OpenAI to introduce a program to grow AI infrastructure
OpenAI is unveiling a program called OpenAI for Countries, which aims to develop the local infrastructure needed to better serve international AI clients. The AI startup will work with governments to assist with increasing data center capacity and customizing OpenAI’s products to meet specific language and local needs. OpenAI for Countries is part of efforts to support the company’s expansion of its AI data center Project Stargate to new locations outside the U.S., per Bloomberg.

OpenAI promises to make changes to prevent future ChatGPT sycophancy
OpenAI has announced its plan to make changes to its procedures for updating the AI models that power ChatGPT, following an update that caused the platform to become overly sycophantic for many users.

April 2025

OpenAI clarifies the reason ChatGPT became overly flattering and agreeable
OpenAI has released a post on the recent sycophancy issues with GPT-4o, the default AI model powering ChatGPT, which led the company to revert an update to the model released last week. CEO Sam Altman acknowledged the issue on Sunday and confirmed two days later that the GPT-4o update was being rolled back. OpenAI is working on “additional fixes” to the model’s personality. Over the weekend, users on social media criticized the new model for making ChatGPT too validating and agreeable, and it quickly became a popular meme.

OpenAI is working to fix a “bug” that let minors engage in inappropriate conversations
An issue within OpenAI’s ChatGPT enabled the chatbot to create graphic erotic content for accounts registered by users under the age of 18, as demonstrated by TechCrunch’s testing, a fact later confirmed by OpenAI. “Protecting younger users is a top priority, and our Model Spec, which guides model behavior, clearly restricts sensitive content like erotica to narrow contexts such as scientific, historical, or news reporting,” a spokesperson told TechCrunch via email. “In this case, a bug allowed responses outside those guidelines, and we are actively deploying a fix to limit these generations.”

OpenAI has added a few features to ChatGPT search, its web search tool, to give users an improved online shopping experience. The company says people can ask super-specific questions using natural language and receive customized results. The chatbot provides recommendations, images, and reviews of products in various categories such as fashion, beauty, home goods, and electronics.

OpenAI wants its AI model to access cloud models for assistance
OpenAI leaders have been talking about allowing the upcoming open model to link up with OpenAI’s cloud-hosted models to improve its ability to respond to intricate questions, two sources familiar with the situation told TechCrunch.
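The hand-off arrangement described above is, per the reporting, still under discussion; OpenAI has not shipped an API for it. Purely as an illustration of the pattern, here is a sketch in which the local model, its confidence score, and the escalation threshold are all hypothetical, and only the cloud call uses a real OpenAI SDK method:

```python
# Illustrative only: a local open-weights model that escalates hard queries
# to a cloud-hosted model. All local-side names here are hypothetical;
# OpenAI has not published an API for this kind of hand-off.
from dataclasses import dataclass
from openai import OpenAI

@dataclass
class LocalAnswer:
    text: str
    confidence: float  # hypothetical self-reported confidence, 0.0 to 1.0

def answer_locally(prompt: str) -> LocalAnswer:
    # Stand-in for running a downloaded open-weights model on local hardware.
    return LocalAnswer(text="(local draft answer)", confidence=0.42)

def answer(prompt: str, threshold: float = 0.7) -> str:
    local = answer_locally(prompt)
    if local.confidence >= threshold:
        return local.text
    # Escalate intricate questions to a larger cloud-hosted model.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # any hosted model; the routing logic is the point
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The appeal of such a design is that routine queries stay cheap and local, while intricate ones borrow the capability of a larger hosted model.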
OpenAI aims to make its new “open” AI model the best on the market
OpenAI is preparing to launch an AI system that will be openly accessible, allowing users to download it for free without any API restrictions. Aidan Clark, OpenAI’s VP of research, is spearheading the development of the open model, which is in the very early stages, sources familiar with the situation told TechCrunch.

OpenAI’s GPT-4.1 may be less aligned than earlier models
OpenAI released a new AI model called GPT-4.1 in mid-April. However, multiple independent tests indicate that the model is less reliable than previous OpenAI releases. The company skipped the customary step of publishing a safety report, or system card, for GPT-4.1, claiming in a statement to TechCrunch that “GPT-4.1 is not a frontier model, so there won’t be a separate system card released for it.”

OpenAI’s o3 AI model scored lower than expected on a benchmark
Questions have been raised about OpenAI’s transparency and model-testing procedures after a discrepancy emerged between first- and third-party benchmark results for the o3 AI model. OpenAI introduced o3 in December, stating that the model could solve approximately 25% of questions on FrontierMath, a difficult math problem set. Epoch AI, the research institute behind FrontierMath, found that o3 achieved a score of approximately 10%, significantly lower than OpenAI’s top reported score.

OpenAI unveils Flex processing for cheaper, slower AI tasks
OpenAI has launched a new API feature called Flex processing that lets users run AI models at a lower cost in exchange for slower response times and occasional resource unavailability. Flex processing is available in beta on the o3 and o4-mini reasoning models for non-production tasks like model evaluations, data enrichment, and asynchronous workloads.
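Here is a minimal sketch of what a Flex request looks like, assuming the OpenAI Python SDK; the service_tier="flex" request option matches OpenAI’s documentation for the beta, but treat the exact parameter as something to verify against current docs:

```python
# Sketch: running a non-production job on Flex processing.
# Assumes the OpenAI Python SDK with OPENAI_API_KEY set; the "flex"
# service tier is per OpenAI's beta docs and may evolve.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3",                  # the Flex beta covers o3 and o4-mini
    service_tier="flex",         # cheaper, slower, may hit resource limits
    timeout=900.0,               # slower responses warrant a generous timeout
    messages=[{"role": "user", "content": "Summarize this evaluation run..."}],
)
print(response.choices[0].message.content)
```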
OpenAI’s latest AI models now have a safeguard against biorisks
OpenAI has rolled out a new system to monitor its AI reasoning models, o3 and o4-mini, for biological and chemical threats. The system is designed to prevent the models from giving advice that could potentially lead to harmful attacks, as stated in OpenAI’s safety report.

OpenAI launches its latest reasoning models, o3 and o4-mini
OpenAI has released two new reasoning models, o3 and o4-mini, just two days after launching GPT-4.1. The company claims o3 is the most advanced reasoning model it has developed, while o4-mini is said to provide a balance of price, speed, and performance. The new models stand out from previous reasoning models because they can use ChatGPT features like web browsing, coding, and image processing and generation. But they hallucinate more than several of OpenAI’s previous models.

OpenAI has added a new section to ChatGPT to offer easier access to AI-generated images for all user tiers
OpenAI introduced a new section called “library” to make it easier for users to create images on mobile and web platforms, per the company’s X post.

OpenAI could “adjust” its safeguards if rivals release “high-risk” AI
OpenAI said on Tuesday that it might revise its safety standards if “another frontier AI developer releases a high-risk system without comparable safeguards.” The move shows how commercial AI developers face more pressure to deploy models quickly amid increased competition.

OpenAI is currently in the early stages of developing its own social media platform to compete with Elon Musk’s X and Mark Zuckerberg’s Instagram and Threads, according to The Verge. It is unclear whether OpenAI intends to launch the social network as a standalone application or incorporate it into ChatGPT.

OpenAI will remove its largest AI model, GPT-4.5, from the API in July
OpenAI will discontinue its largest AI model, GPT-4.5, from its API even though it launched only in late February. GPT-4.5 will remain available in a research preview for paying customers. Developers can use GPT-4.5 through OpenAI’s API until July 14; after that, they will need to switch to GPT-4.1, which was released on April 14.

OpenAI unveils GPT-4.1 AI models that focus on coding capabilities
OpenAI has launched three members of the GPT-4.1 family — GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano — with a specific focus on coding capabilities. They’re accessible via the OpenAI API but not ChatGPT. In the competition to develop advanced programming models, GPT-4.1 will rival AI models such as Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and DeepSeek’s upgraded V3.

OpenAI will discontinue ChatGPT’s GPT-4 at the end of April
OpenAI plans to sunset GPT-4, an AI model introduced more than two years ago, and replace it with GPT-4o, the current default model, per its changelog. The change takes effect on April 30. GPT-4 will remain available via OpenAI’s API.

OpenAI could release GPT-4.1 soon
OpenAI may launch several new AI models, including GPT-4.1, soon, The Verge reported, citing anonymous sources. GPT-4.1 would be an update of OpenAI’s GPT-4o, which was released last year. On the list of upcoming models are GPT-4.1 and smaller versions like GPT-4.1 mini and nano, per the report.

OpenAI has updated ChatGPT to use information from your previous conversations
OpenAI started updating ChatGPT to enable the chatbot to remember previous conversations with a user and customize its responses based on that context. This feature is rolling out to ChatGPT Pro and Plus users first, excluding those in the U.K., EU, Iceland, Liechtenstein, Norway, and Switzerland.

OpenAI is working on watermarks for images made with ChatGPT
It looks like OpenAI is working on a watermarking feature for images generated using GPT-4o. AI researcher Tibor Blaho spotted a new “ImageGen” watermark feature in the new beta of ChatGPT’s Android app. Blaho also found mentions of other tools: “Structured Thoughts,” “Reasoning Recap,” “CoT Search Tool,” and “l1239dk1.”

OpenAI offers ChatGPT Plus for free to U.S., Canadian college students
OpenAI is offering its $20-per-month ChatGPT Plus subscription tier for free to all college students in the U.S. and Canada through the end of May. The offer will let millions of students use OpenAI’s premium service, which offers access to the company’s GPT-4o model, image generation, voice interaction, and research tools that are not available in the free version.

ChatGPT users have generated over 700M images so far
More than 130 million users have created over 700 million images since ChatGPT got its upgraded image generator on March 25, according to OpenAI COO Brad Lightcap. The image generator was made available to all ChatGPT users on March 31, and went viral for being able to create Ghibli-style photos.
OpenAI’s o3 model could cost more to run than initially estimated
The Arc Prize Foundation, which develops the AI benchmark tool ARC-AGI, has updated its estimate of the computing costs for OpenAI’s o3 “reasoning” model on ARC-AGI. The organization originally estimated that the best-performing configuration of o3 it tested, o3 high, would cost approximately $3,000 to solve a single problem. The Foundation now thinks the cost could be much higher, possibly around $30,000 per task.

OpenAI CEO says capacity issues will cause product delays
In a series of posts on X, OpenAI CEO Sam Altman said the popularity of the company’s new image-generation tool may cause product releases to be delayed. “We are getting things under control, but you should expect new releases from OpenAI to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges,” he wrote.

March 2025

OpenAI plans to release a new ‘open’ AI language model
OpenAI intends to release its “first” open language model since GPT-2 “in the coming months.” The company plans to host developer events to gather feedback and eventually showcase prototypes of the model. The first developer event is to be held in San Francisco, with sessions to follow in Europe and Asia.

OpenAI removes ChatGPT’s restrictions on image generation
OpenAI made a notable change to its content moderation policies after the success of its new image generator in ChatGPT, which went viral for being able to create Studio Ghibli-style images. The company has updated its policies to allow ChatGPT to generate images of public figures, hateful symbols, and racial features when requested. OpenAI had previously declined such prompts due to the potential controversy or harm they may cause. However, the company has now “evolved” its approach, as stated in a blog post published by Joanne Jang, the lead for OpenAI’s model behavior.

OpenAI adopts Anthropic’s standard for linking AI models with data
OpenAI wants to incorporate Anthropic’s Model Context Protocol (MCP) into all of its products, including the ChatGPT desktop app. MCP, an open-source standard, helps AI models generate more accurate and suitable responses to specific queries, and lets developers create bidirectional links between data sources and AI applications like chatbots. The protocol is currently available in the Agents SDK, and support for the ChatGPT desktop app and Responses API will be coming soon, OpenAI CEO Sam Altman said.
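Because MCP is an open standard, exposing a data source to any MCP-capable client takes only a small server. Below is a minimal sketch using the official Python SDK (the mcp package); the server name and the search_docs tool are illustrative stand-ins for whatever data you want to connect:

```python
# Minimal MCP server sketch using the official Python SDK (pip install mcp).
# "docs-server" and search_docs are illustrative; any function exposed this
# way becomes a tool that MCP-capable AI clients can call.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docs-server")

@mcp.tool()
def search_docs(query: str) -> str:
    """Search internal documentation for a query (stub implementation)."""
    # A real server would query a database or search index here.
    return f"Top result for {query!r}: ..."

if __name__ == "__main__":
    mcp.run()  # serves the MCP protocol over stdio by default
```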
OpenAI’s viral Studio Ghibli-style images could raise AI copyright concerns
The latest update of the image generator in OpenAI’s ChatGPT has triggered a flood of AI-generated memes in the style of Studio Ghibli, the Japanese animation studio behind blockbuster films like “My Neighbor Totoro” and “Spirited Away.” The burgeoning mass of Ghibli-esque images has sparked concerns about whether OpenAI has violated copyright laws, especially since the company is already facing legal action for using source material without authorization.

OpenAI expects revenue to triple to $12.7 billion this year
OpenAI expects its revenue to triple to $12.7 billion in 2025, fueled by the performance of its paid AI software, Bloomberg reported, citing an anonymous source. While the startup doesn’t expect to reach positive cash flow until 2029, it expects revenue to increase significantly in 2026 to surpass $29.4 billion, the report said.

ChatGPT has upgraded its image-generation feature
OpenAI on Tuesday rolled out a major upgrade to ChatGPT’s image-generation capabilities: ChatGPT can now use the GPT-4o model to generate and edit images and photos directly. The feature went live earlier this week in ChatGPT and Sora, OpenAI’s AI video-generation tool, for subscribers of the company’s Pro plan, priced at $200 a month, and will be available soon to ChatGPT Plus subscribers and developers using the company’s API service. The company’s CEO Sam Altman said on Wednesday, however, that the release of the image generation feature to free users would be delayed due to higher demand than the company expected.

OpenAI announces leadership updates
Brad Lightcap, OpenAI’s chief operating officer, will lead the company’s global expansion and manage corporate partnerships as CEO Sam Altman shifts his focus to research and products, according to a blog post from OpenAI. Lightcap, who previously worked with Altman at Y Combinator, joined the Microsoft-backed startup in 2018. OpenAI also said Mark Chen would step into the expanded role of chief research officer, and Julia Villagra will take on the role of chief people officer.

OpenAI’s AI voice assistant now has advanced features
OpenAI has updated its AI voice assistant with improved chatting capabilities, according to a video posted on Monday (March 24) to the company’s official media channels. The update enables real-time conversations, and the AI assistant is said to be more personable and to interrupt users less often. Users on ChatGPT’s free tier can now access the new version of Advanced Voice Mode, while paying users will receive answers that are “more direct, engaging, concise, specific, and creative,” a spokesperson from OpenAI told TechCrunch.

OpenAI and Meta have separately engaged in discussions with Indian conglomerate Reliance Industries regarding potential collaborations to enhance their AI services in the country, per a report by The Information. One key topic being discussed is Reliance Jio distributing OpenAI’s ChatGPT. Reliance has proposed selling OpenAI’s models to businesses in India through an application programming interface (API) so they can incorporate AI into their operations. Meta also plans to bolster its presence in India by constructing a large 3GW data center in Jamnagar, Gujarat. OpenAI, Meta, and Reliance have not yet officially announced these plans.

OpenAI faces privacy complaint in Europe over its chatbot’s defamatory hallucinations
Noyb, a privacy rights advocacy group, is supporting an individual in Norway who was shocked to discover that ChatGPT was providing false information about him, stating that he had been found guilty of killing two of his children and trying to harm the third. “The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”

OpenAI upgrades its transcription and voice-generating AI models
OpenAI has added new transcription and voice-generating AI models to its APIs: a text-to-speech model, “gpt-4o-mini-tts,” that delivers more nuanced and realistic-sounding speech, as well as two speech-to-text models called “gpt-4o-transcribe” and “gpt-4o-mini-transcribe.” The company claims they are improved versions of its existing audio models and that they hallucinate less.
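A minimal sketch of calling both kinds of audio model named above through the OpenAI Python SDK; the voice choice and file name are illustrative:

```python
# Sketch: text-to-speech and speech-to-text with the models named above.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY; "alloy" and the file
# name are illustrative choices.
from openai import OpenAI

client = OpenAI()

# Text to speech with gpt-4o-mini-tts.
speech = client.audio.speech.create(
    model="gpt-4o-mini-tts",
    voice="alloy",
    input="Hello! This sentence will be rendered as audio.",
)
with open("speech.mp3", "wb") as f:
    f.write(speech.read())

# Speech to text with gpt-4o-transcribe.
with open("speech.mp3", "rb") as f:
    transcript = client.audio.transcriptions.create(
        model="gpt-4o-transcribe",
        file=f,
    )
print(transcript.text)
```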
OpenAI has launched o1-pro, a more powerful version of its o1
OpenAI has introduced o1-pro in its developer API. OpenAI says o1-pro uses more computing than its o1 “reasoning” AI model to deliver “consistently better responses.” It’s only accessible to select developers who have spent at least $5 on OpenAI API services. OpenAI charges $150 for every million tokens (about 750,000 words) fed into the model and $600 for every million tokens the model produces; at those rates, a request with a 10,000-token prompt and a 5,000-token answer would cost $1.50 plus $3.00, or $4.50. That’s twice the input price of OpenAI’s GPT-4.5 and 10 times the price of regular o1.

Noam Brown, who heads AI reasoning research at OpenAI, thinks that certain types of “reasoning” AI models could have been developed 20 years ago if researchers had understood the correct approach and algorithms.

OpenAI says it has trained an AI that’s “really good” at creative writing
OpenAI CEO Sam Altman said, in a post on X, that the company has trained a “new model” that’s “really good” at creative writing. He posted a lengthy sample from the model given the prompt “Please write a metafictional literary short story about AI and grief.” OpenAI has not extensively explored the use of AI for writing fiction; the company has mostly concentrated on challenges in rigid, predictable areas such as math and programming, and the model might not be that great at creative writing at all.

OpenAI rolled out new tools designed to help developers and businesses build AI agents — automated systems that can independently accomplish tasks — using the company’s own AI models and frameworks. The tools are part of OpenAI’s new Responses API, which enables enterprises to develop customized AI agents that can perform web searches, scan through company files, and navigate websites, similar to OpenAI’s Operator product. The Responses API effectively replaces OpenAI’s Assistants API, which the company plans to discontinue in the first half of 2026.
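A minimal sketch of a Responses API call that grants the model the built-in web search tool, assuming the OpenAI Python SDK; the tool type string "web_search_preview" is the name OpenAI used at launch and may have changed since:

```python
# Sketch: a one-shot "agent" request on the Responses API with built-in
# web search. Assumes the OpenAI Python SDK; the "web_search_preview"
# tool type is the launch-era name and may have been renamed.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],  # let the model search the web
    input="What did OpenAI announce this week? Cite your sources.",
)
print(response.output_text)  # convenience accessor for the final text
```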
OpenAI reportedly plans to charge up to $20,000 a month for specialized AI ‘agents’
OpenAI intends to release several “agent” products tailored for different applications, including sorting and ranking sales leads and software engineering, according to a report from The Information. One, a “high-income knowledge worker” agent, will reportedly be priced at $2,000 a month. Another, a software developer agent, is said to cost $10,000 a month. The most expensive rumored agents, which are said to be aimed at supporting “PhD-level research,” are expected to cost $20,000 per month. The jaw-dropping figures are indicative of how much cash OpenAI needs right now: the company lost roughly $5 billion last year after paying for costs related to running its services and other expenses. It’s unclear when these agentic tools might launch or which customers will be eligible to buy them.

ChatGPT can directly edit your code
The latest version of the macOS ChatGPT app allows users to edit code directly in supported developer tools, including Xcode, VS Code, and JetBrains. ChatGPT Plus, Pro, and Team subscribers can use the feature now, and the company plans to roll it out to more tiers, including Enterprise, Edu, and free users.

ChatGPT’s weekly active users doubled in less than 6 months, thanks to new releases
According to a new report from VC firm Andreessen Horowitz (a16z), OpenAI’s AI chatbot, ChatGPT, experienced solid growth in the second half of 2024. It took ChatGPT nine months to grow its weekly active users from 100 million in November 2023 to 200 million in August 2024, but less than six months to double that number once more, according to the report. ChatGPT’s weekly active users reached 300 million by December 2024 and 400 million by February 2025. ChatGPT has experienced significant growth recently due to the launch of new models and features, such as GPT-4o, with multimodal capabilities. ChatGPT usage spiked from April to May 2024, shortly after that model’s launch.

February 2025

OpenAI cancels its o3 AI model in favor of a ‘unified’ next-gen release
OpenAI has effectively canceled the release of o3 in favor of what CEO Sam Altman is calling a “simplified” product offering. In a post on X, Altman said that, in the coming months, OpenAI will release a model called GPT-5 that “integrates a lot of [OpenAI’s] technology,” including o3, in ChatGPT and its API. As a result of that roadmap decision, OpenAI no longer plans to release o3 as a standalone model.

ChatGPT may not be as power-hungry as once assumed
A commonly cited stat is that ChatGPT requires around 3 watt-hours of power to answer a single question. Using OpenAI’s latest default model for ChatGPT, GPT-4o, as a reference, nonprofit AI research institute Epoch AI found the average ChatGPT query consumes around 0.3 watt-hours, a tenth of the earlier figure. However, the analysis doesn’t consider the additional energy costs incurred by ChatGPT features like image generation or input processing.

OpenAI now reveals more of its o3-mini model’s thought process
In response to pressure from rivals like DeepSeek, OpenAI is changing the way its o3-mini model communicates its step-by-step “thought” process. ChatGPT users will see an updated “chain of thought” that shows more of the model’s “reasoning” steps and how it arrived at answers to questions.

You can now use ChatGPT web search without logging in
OpenAI is now allowing anyone to use ChatGPT web search without having to log in. While OpenAI had previously allowed users to ask ChatGPT questions without signing in, responses were restricted to the chatbot’s last training update. This only applies through ChatGPT.com, however. To use ChatGPT in any form through the native mobile app, you will still need to be logged in.

OpenAI unveils a new ChatGPT agent for ‘deep research’
OpenAI announced a new AI “agent” called deep research that’s designed to help people conduct in-depth, complex research using ChatGPT. OpenAI says the “agent” is intended for instances where you don’t just want a quick answer or summary, but instead need to assiduously consider information from multiple websites and other sources.

January 2025

OpenAI used a subreddit to test AI persuasion
OpenAI used the subreddit r/ChangeMyView to measure the persuasive abilities of its AI reasoning models. OpenAI says it collects user posts from the subreddit and asks its AI models to write replies, in a closed environment, that would change the Reddit user’s mind on a subject. The company then shows the responses to testers, who assess how persuasive the argument is, and finally OpenAI compares the AI models’ responses to human replies for that same post.

OpenAI launches o3-mini, its latest ‘reasoning’ model
OpenAI launched a new AI “reasoning” model, o3-mini, the newest in the company’s o family of models. OpenAI first previewed the model in December alongside a more capable system called o3. OpenAI is pitching its new model as both “powerful” and “affordable.”

ChatGPT’s mobile users are 85% male, report says
A new report from app analytics firm Appfigures found that over half of ChatGPT’s mobile users are under age 25, with users between ages 50 and 64 making up the second-largest age demographic. The gender gap among ChatGPT users is even more significant: Appfigures estimates that across age groups, men make up 84.5% of all users.
OpenAI launches ChatGPT plan for US government agencies
OpenAI launched ChatGPT Gov, designed to give U.S. government agencies an additional way to access the tech. ChatGPT Gov includes many of the capabilities found in OpenAI’s corporate-focused tier, ChatGPT Enterprise. OpenAI says that ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance, and could expedite internal authorization of OpenAI’s tools for handling non-public sensitive data.

More teens report using ChatGPT for schoolwork, despite the tech’s faults
Younger Gen Zers are embracing ChatGPT for schoolwork, according to a new survey by the Pew Research Center. In a follow-up to its 2023 poll on ChatGPT usage among young people, Pew asked ~1,400 U.S.-based teens ages 13 to 17 whether they’ve used ChatGPT for homework or other school-related assignments. Twenty-six percent said that they had, double the share from two years ago. Just over half of teens responding to the poll said they think it’s acceptable to use ChatGPT for researching new subjects. But considering the ways ChatGPT can fall short, the results are possibly cause for alarm.

OpenAI says it may store deleted Operator data for up to 90 days
OpenAI says that it might store chats and associated screenshots from customers who use Operator, the company’s AI “agent” tool, for up to 90 days — even after a user manually deletes them. While OpenAI has a similar deleted-data retention policy for ChatGPT, the retention period for ChatGPT is only 30 days, 60 days shorter than Operator’s.

OpenAI launches Operator, an AI agent that performs tasks autonomously
OpenAI is launching a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform certain actions. Operator promises to automate tasks such as booking travel accommodations, making restaurant reservations, and shopping online.

Operator, OpenAI’s agent tool, could be released sooner rather than later. Changes to ChatGPT’s code base suggest that Operator will be available as an early research preview to users on the $200 Pro subscription plan. The changes aren’t yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT’s client-side code. TechCrunch separately identified the same references to Operator on OpenAI’s website.

OpenAI tests phone number-only ChatGPT signups
OpenAI has begun testing a feature that lets new ChatGPT users sign up with only a phone number — no email required. The feature is currently in beta in the U.S. and India. However, users who create an account using their number can’t upgrade to one of OpenAI’s paid plans without verifying their account via an email. Multi-factor authentication also isn’t supported without a valid email.

ChatGPT now lets you schedule reminders and recurring tasks
ChatGPT’s new beta feature, called tasks, allows users to set simple reminders. For example, you can ask ChatGPT to remind you when your passport expires in six months, and the AI assistant will follow up with a push notification on whatever platform you have tasks enabled. The feature will start rolling out to ChatGPT Plus, Team, and Pro users around the globe this week.
OpenAI suggests traits like “Chatty,” “Encouraging,” and “Gen Z.” However, some users reported that the new options have disappeared, so it’s possible they went live prematurely.

FAQs:

What is ChatGPT? How does it work?
ChatGPT is a general-purpose chatbot, developed by tech startup OpenAI, that uses artificial intelligence to generate text after a user enters a prompt. The chatbot uses GPT-4, a large language model that uses deep learning to produce human-like text.

When did ChatGPT get released?
ChatGPT was released for public use on November 30, 2022.

What is the latest version of ChatGPT?
Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o.

Can I use ChatGPT for free?
Yes. There is a free version of ChatGPT that only requires a sign-in, in addition to the paid version, ChatGPT Plus.

Who uses ChatGPT?
Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions and concerns.

What companies use ChatGPT?
Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool. Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. Brooklyn-based 3D display startup Looking Glass uses ChatGPT to produce holograms you can communicate with. And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users to help onboard into the web3 space.

What does GPT mean in ChatGPT?
GPT stands for Generative Pre-trained Transformer.

What is the difference between ChatGPT and a chatbot?
A chatbot can be any software or system that holds a dialogue with a person, but it doesn’t necessarily have to be AI-powered. For example, some chatbots are rules-based in the sense that they’ll give canned responses to questions. ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt.

Can ChatGPT write essays?
Yes.

Can ChatGPT commit libel?
Due to the nature of how these models work, they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using it to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well at this point be libel. We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest-moving target in the industry.

Does ChatGPT have an app?
Yes, there is a free ChatGPT mobile app for iOS and Android users.

What is the ChatGPT character limit?
It’s not documented anywhere that ChatGPT has a character limit. However, users have noted that there are some character limitations after around 500 words.

Does ChatGPT have an API?
Yes, it was released March 1, 2023. (A minimal call sketch appears after these FAQs.)

What are some sample everyday uses for ChatGPT?
Everyday examples include programming, scripts, email replies, listicles, blog ideas, summarization, and more.

What are some advanced uses for ChatGPT?
Advanced examples include debugging code, explaining programming languages and scientific concepts, and complex problem solving.

How good is ChatGPT at writing code?
It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used.

Can you save a ChatGPT chat?
Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet.

Are there alternatives to ChatGPT?
Yes. There are multiple AI-powered chatbot competitors, such as Together, Google’s Gemini, and Anthropic’s Claude, and developers are creating open source alternatives.

How does ChatGPT handle data privacy?
OpenAI has said that individuals in “certain jurisdictions” (such as the EU) can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to make requests for deletion of AI-generated references about you, although OpenAI notes it may not grant every request, since it must balance privacy requests against freedom of expression “in accordance with applicable laws.” The web form for requesting deletion of data about you is titled “OpenAI Personal Data Removal Request.” In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest” (LI), pointing users toward more information about requesting an opt-out when it writes: “See here for instructions on how you can opt out of our use of your information to train our models.”

What controversies have surrounded ChatGPT?
Recently, Discord announced that it had integrated OpenAI’s technology into its bot named Clyde, where two users tricked Clyde into providing them with instructions for making the illegal drug methamphetamine (meth) and the incendiary mixture napalm.

An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.

CNET found itself in the midst of controversy after Futurism reported the publication was publishing articles under a mysterious byline completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even if the information was incorrect.

Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with. There have also been cases of ChatGPT accusing individuals of false crimes.

Where can I find examples of ChatGPT prompts?
Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day.

Can ChatGPT be detected?
Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best.

Are ChatGPT chats public?
No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service.

What lawsuits are there surrounding ChatGPT?
None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.

Are there issues regarding plagiarism with ChatGPT?
Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.
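The FAQ above notes that the ChatGPT API launched on March 1, 2023. As a minimal sketch, assuming OpenAI’s official Python client and an OPENAI_API_KEY in the environment (the model name and prompts below are illustrative placeholders, not part of the original article), a call looks roughly like this:

    # Minimal sketch of a ChatGPT API call via the official Python client (pip install openai).
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o",  # any chat-capable model available to the account
        messages=[
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "In one sentence, what does GPT stand for?"},
        ],
    )
    print(response.choices[0].message.content)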
  • After Google IO’s big AI reveals, my iPhone has never felt dumber

    Macworld

    I can’t believe I’m about to state this, but I’m considering switching from iOS to Android. Not right now, but what I once considered an absurd notion is rapidly becoming a realistic possibility. While Apple may have an insurmountable lead in hardware, iPhones and Android phones are no longer on par with each other when it comes to AI and assistants, and the gap is only growing wider. At its annual I/O conference on Tuesday, Google didn’t just preview some niche AI gimmicks that look good in a demo; it initiated a computing revolution that Apple simply won’t be able to replicate anytime soon, if ever.

    Actions speak louder than words

    The first thing I noticed during the main I/O keynote was how confident the speakers were. Unlike Apple’s canned Apple Intelligence demo at last year’s WWDC, Google opted for live demos and presentations that only reflect its strong belief that everything just works. Many of the announced features were made available on the same day, while some others will follow as soon as this summer. Google didn’t (primarily, at least) display nonexistent concepts and mockups or pre-record the event. It likely didn’t make promises it can’t keep, either.

    If you have high AI hopes for WWDC25, I’d like to remind you that the latest rumors suggest Apple will ignore the elephant in the room, possibly focusing on the revolutionary new UI and other non-AI goods instead. I understand Apple’s tough position—given how last year’s AI vision crumbled before its eyes—but I’d like to think a corporation of that size could’ve acquired its way into building a functional product over the past 12 months. For the first time in as long as I can remember, Google is selling confidence and accountability while Apple is hiding behind glitzy smoke and mirrors.

    Google’s demos at I/O showed the true power of AI. Foundry

    Apple’s tight grip will only suffocate innovation

    A few months ago, Apple added ChatGPT to Siri’s toolbox, letting users rely on OpenAI’s models for complex queries. While a welcome addition, it’s unintuitive to use. In many cases, you need to explicitly ask Apple’s virtual assistant to use ChatGPT, and any accidental taps on the screen will dismiss the entire conversation. Without ChatGPT, Siri is just a bare-bones voice command receiver that can set timers and, at best, fetch basic information from the web.

    Conversely, Google has built an in-house AI system that integrates fully into newer versions of Android. Gemini is evolving from a basic chatbot into an integral part of Google’s ecosystem. It can research and generate proper reports, video chat with you, and pull personal information from your Gmail, Drive, and other Google apps.

    Gemini is already light-years ahead of Siri—and it’s only getting better. Foundry

    Google also previewed Project Astra, which will let Gemini fully control your Android phone, thanks to its agentic capabilities. It’s similar to the revamped Siri with on-screen context awareness (that Apple is reportedly rebuilding from scratch), but much more powerful. While, yes, it’s still just a prototype, Google has seemingly delivered on last year’s promises. Despite Google’s infamous habit of killing and rebranding projects, I actually believe its AI plans will materialize because it has been constantly shipping finished products to users.

    Unlike Apple, Google is also bringing some of its AI features to other platforms. For example, the Gemini app for iPhone now supports the live video chat feature for free. There are rumors that Apple will open up some of its on-device AI models to third-party app developers, but those will likely be limited to Writing Tools and Image Playground. So even if Google is willing to develop more advanced functionalities for iOS, Apple’s system restrictions would throttle them. Third-party developers can’t control the OS, so Google will never be able to build the same comprehensive tools for iPhones.

    Beyond the basics

    Google’s AI plan doesn’t strictly revolve around its Gemini chatbot delivering information. It’s creating a new computing experience powered by artificial intelligence. Google’s AI is coming to Search and Chrome to assist with web browsing in real time. 

    For example, Gemini will help users shop for unique products based on their personal preferences and even virtually try clothes on. Similarly, other Google AI tools can code interfaces based on text prompts, generate video clips from scratch, create music, translate live Meet conferences, and so on. Now, I see how dystopian this all can be, but with fair use, it will be an invaluable resource to students and professionals. 

    Meanwhile, what can Apple Intelligence do? Generate cartoons and proofread articles? While I appreciate Apple’s private, primarily on-device approach, most users care about the results, not the underlying infrastructure.

    Google’s Try It On mode will use AI to show how something will look before you buy it. Foundry

    The wrong path

    During I/O, Google shared its long-term vision for AI, which adds robotics and mixed-reality headsets to the equation. Down the road, the company plans to power machines using the knowledge its AI is gaining each day. It also demoed its upcoming smart glasses, which can mirror Android phone alerts, send texts, translate conversations in real time, scan surrounding objects, and much, much more. 

    While Apple prioritized the Vision Pro headset no one asked for, Google has been focusing its efforts on creating the sleek, practical device users actually need—a more powerful Ray-Ban Meta rival. Before long, Android users will be rocking stylish eyewear and barely using their smartphones in public. Meanwhile, iPhone users will likely be locked out of this futuristic experience because third-party accessories can’t read iOS notifications and interact with the system in the same way.

    Apple is running out of time

    iOS and Android launched as two contrasting platforms. At first, Apple boasted its stability, security, and private approach, while Google’s vision revolved around customization, ease of modding, and openness. Throughout the years, Apple and Google have been learning from each other’s strengths and applying the needed changes to appease their respective user bases. 

    Apple Intelligence had promise, but Apple has failed to deliver its most ambitious features. Foundry

    Recently, it seemed like the two operating systems were finally intersecting: iOS had become more personalizable, while Android deployed stricter guardrails and privacy measures. However, the perceived overlap only lasted for a moment—until the AI boom changed everything.

    The smartphone as we know it today seems to be fading away. AI companies are actively building integrations with other services, and it’s changing how we interact with technology. Mobile apps could become less relevant in the near future, as a universal chatbot would perform the needed tasks based on users’ text and voice prompts. 

    Google is slowly setting this new standard with Android, and if Apple can’t keep up with the times, the iPhone’s relevancy will face the same fate as so many Nokia and BlackBerry phones. And if Apple doesn’t act fast, Siri will be a distant memory.
  • Gemini 2.5 is leaving preview just in time for Google’s new $250 AI subscription

    Google I/O? More like Google AI

    Gemini 2.5 is leaving preview just in time for Google’s new $250 AI subscription

    Gemini 2.5 is rolling out everywhere, and you can pay Google $250 per month for more of it.

    Ryan Whitwam



    May 20, 2025 5:03 pm

    All the new Gemini AI at I/O. Credit: Ryan Whitwam



    MOUNTAIN VIEW, Calif.—Google rolled out early versions of Gemini 2.5 earlier this year, marking a significant improvement over the 2.0 branch. For the first time, Google's chatbot felt competitive with the likes of ChatGPT, but it's been "experimental" and later "preview" since then. At I/O 2025, Google announced general availability for Gemini 2.5, and these models will soon be integrated with Chrome. There's also a fancy new subscription plan to get the most from Google's AI. You probably won't like the pricing, though.
    Gemini 2.5 goes gold
    Even though Gemini 2.5 was revealed a few months ago, the older 2.0 Flash has been the default model all this time. Now that 2.5 is finally ready, the 2.5 Flash model will be swapped in as the new default. This model has built-in simulated reasoning, so its outputs are much more reliable than 2.0 Flash.
    Google says the release version of 2.5 Flash is better at reasoning, coding, and multimodality, but it uses 20–30 percent fewer tokens than the preview version. This edition is now live in Vertex AI, AI Studio, and the Gemini app. It will be made the default model in early June.
    Likewise, the Pro model is shedding its preview title, and it's getting some new goodies to celebrate. Recent updates have solidified the model's lead on the LM Arena leaderboard, which still means something to Google despite the recent drama—yes, AI benchmarking drama is a thing now. It's also getting a capability called Deep Think, which lets the model consider multiple hypotheses for every query. This apparently makes it incredibly good at math and coding. Google plans to do a little more testing on this feature before making it widely available.

    Deep Think is more capable of complex math and coding. Credit: Ryan Whitwam

    Both 2.5 models have adjustable thinking budgets when used in Vertex AI and via the API, and now the models will also include summaries of the "thinking" process for each output. This makes a little progress toward making generative AI less overwhelmingly expensive to run. Gemini 2.5 Pro will also appear in some of Google's dev products, including Gemini Code Assist.
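    As a rough sketch of what an adjustable thinking budget looks like in practice, assuming Google's google-genai Python SDK and a GEMINI_API_KEY in the environment (the parameter names here reflect that SDK and may change), a request can cap the model's reasoning tokens and ask for a summary of its thought process:

        # Sketch of capping Gemini 2.5's thinking budget via the google-genai SDK
        # (pip install google-genai); model name and prompt are placeholders.
        from google import genai
        from google.genai import types

        client = genai.Client()  # reads the GEMINI_API_KEY environment variable

        response = client.models.generate_content(
            model="gemini-2.5-flash",
            contents="Explain why the sky is blue in two sentences.",
            config=types.GenerateContentConfig(
                thinking_config=types.ThinkingConfig(
                    thinking_budget=512,    # cap tokens spent on internal "thinking"
                    include_thoughts=True,  # also return a summary of that process
                )
            ),
        )
        print(response.text)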
    Gemini Live, previously known as Project Astra, started to appear on mobile devices over the last few months. Initially, you needed to have a Gemini subscription or a Pixel phone to access Gemini Live, but now it's coming to all Android and iOS devices immediately. Google demoed a future "agentic" capability in the Gemini app that can actually control your phone, search the web for files, open apps, and make calls. It's perhaps a little aspirational, just like the Astra demo from last year. The version of Gemini Live we got wasn't as good, but as a glimpse of the future, it was impressive.
    There are also some developments in Chrome, and you guessed it, it's getting Gemini. It's not dissimilar from what you get in Edge with Copilot. There's a little Gemini icon in the corner of the browser, which you can click to access Google's chatbot. You can ask it about the pages you're browsing, have it summarize those pages, and ask follow-up questions.
    Google AI Ultra is ultra-expensive
    Since launching Gemini, Google has only had a single $20 monthly plan for AI features. That plan granted you access to the Pro models and early versions of Google's upcoming AI. At I/O, Google is catching up to AI firms like OpenAI, which have offered sky-high AI plans. Google's new Google AI Ultra plan will cost $250 per month, more than the $200 plan for ChatGPT Pro.

    So what does your $250 get you every month? You'll get all the models included with the basic plan with much higher usage limits. If you're using video and image generation, for instance, you won't bump against any limits. Plus, Ultra comes with the newest and most expensive models. For example, Ultra subs will get immediate access to Gemini in Chrome, as well as a new agentic model capable of computer use in the Gemini API (Project Mariner).

    Gemini Ultra has everything from Pro, plus higher limits and instant access to new tools. Credit: Ryan Whitwam

    That's probably still not worth it for most Gemini users, but Google is offering a deal right now. Ultra subscribers will get a 50 percent discount for the first three months, but is still a tough sell for AI. It's available in the US today and will come to other regions soon.
    A faster future?
    Google previewed what could be an important advancement in generative AI for the future. Most of the text and code-based outputs you've seen are generated from beginning to end, token by token, by a large language model (LLM). Diffusion works a bit differently and is best known for image generation, but Google is now experimenting with Gemini Diffusion.
    Diffusion models create images by starting with random noise and then denoising it into what you asked for. Gemini Diffusion works similarly, generating entire blocks of tokens at the same time. The model can therefore work much faster, and it can check its work as it goes to make the final output more accurate than comparable LLMs. Google says Gemini Diffusion is 2.5 times faster than Gemini 2.5 Flash Lite, which is its fastest standard model, while also producing much better results.
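    To make that contrast concrete, here is a deliberately toy illustration, not Google's actual method: a whole block of tokens starts as noise and is refined in parallel over a few denoising passes, instead of being emitted left to right. Everything in it (the vocabulary, the "target" the denoiser snaps toward) is invented for the demo.

        import random

        VOCAB = ["the", "cat", "sat", "on", "mat", "a"]
        TARGET = ["the", "cat", "sat", "on", "the", "mat"]  # stand-in for the model's intent

        def denoise_step(block, strength):
            # Toy "denoiser": each position independently snaps to its target token
            # with probability `strength`; a real model predicts all positions jointly.
            return [goal if random.random() < strength else tok
                    for tok, goal in zip(block, TARGET)]

        random.seed(0)
        block = [random.choice(VOCAB) for _ in TARGET]  # start from pure token "noise"
        print("noise :", " ".join(block))
        for step, strength in enumerate([0.3, 0.6, 1.0], start=1):
            block = denoise_step(block, strength)
            print(f"step {step}:", " ".join(block))

    The point of the sketch is only that every position in the block is updated at once on each pass, which is why a diffusion-style model can be much faster than strictly left-to-right generation.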
    Google claims Gemini Diffusion is capable of previously unheard-of accuracy in complex math and coding. However, it's not being released right away like many of the other I/O Gemini features. Google DeepMind is accepting applications to test it, but it may be a while before the model exits the experimental stage.
    Even though I/O was wall-to-wall Gemini, Google still has much, much more AI in store.

    Ryan Whitwam
    Senior Technology Reporter


    Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he's written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

  • New The First Descendant feature coming in update 1.2.18 divides fans

    You can trust VideoGamer. Our team of gaming experts spend hours testing and reviewing the latest games, to ensure you're reading the most comprehensive guide possible. Rest assured, all imagery and advice is unique and original. Check out how we test and review games here

    The First Descendant update 1.2.18 is set to come out on May 22nd, and it is a big update introducing lots of new content. Already previewed are a new VIB boss named Icemaden and a new Party Finder system that should greatly improve matchmaking. However, Nexon has now announced another new feature for The First Descendant update 1.2.18, and it’s something that has divided the TFD community.
    Nexon announces new feature for The First Descendant update 1.2.18
    On the official The First Descendant X account, Nexon has announced another new feature for update 1.2.18. This feature is the ability to “peek into other loadouts”. Basically, you’ll be able to see the modules used by another player, as well as see their dye along with makeup and all other costume cosmetics.
    Image credit: @FirstDescendant on X
    While this sounds like a good idea, there is some division amongst the community. In response to the announcement on X, there are a lot of replies against the peek. There are comments worried that it will ruin build diversity, and there are simply players who would rather keep their build a secret.
    On Reddit, one of the concerns fans have is that other players will leave matches due to not being impressed with another player’s build. As per one comment on the TFD subreddit, the “Game’s already full of crybabies who leave if they see a ‘low tier’ desc, so their concerns are legitimate”. There are also people who don’t want their unique build of cosmetics and dyes to be copied by others.
    However, in favor of player inspection being added, one comment on X points out that this feature is available in other games and was supposed to be part of TFD at launch.
    For those against player inspection, many are simply arguing that there should be an optional toggle so players can choose whether to hide their builds. This would be the perfect middle ground, allowing player inspection for those who want it while letting those who don’t opt out.
    Unfortunately, we don’t know right now if there will be a toggle for player inspection. Fortunately, we only have to wait until May 22nd to find out.
    For more The First Descendant, we have a guide to the best skills and loadout for Viessa, along with the best skills, gear, and mods for the hugely popular Bunny. We also have a guide for Freyna along with fundamental tips for beginners.

    The First Descendant

    Platform:
    PC, PlayStation 4, PlayStation 5, Xbox Series S, Xbox Series X

    Genre:
    Action, Adventure, RPG

