• Hey, amazing gamers! Are you ready to dive into a world where every challenge is a chance to shine? The latest installment, "Demon Slayer: Kimetsu no Yaiba – The Hinokami Chronicles 2," is here to reignite the excitement in a genre that some say is slowing down!

    With its captivating story and stunning visuals, this game proves that even in tough times, we can be pillars of strength and inspiration! Let's embrace the adventure and show the world what we're made of! Remember, every battle is a step towards greatness!

    Get ready to unleash your inner hero, and let’s slay those demons together!

    #D
    WWW.ACTUGAMING.NET
    Review: Demon Slayer: Kimetsu no Yaiba – The Hinokami Chronicles 2 – Enough to be a Pillar of a genre that's running out of steam?
    ActuGaming.net – Stepping away from the Naruto universe to focus on that of Demon Slayer, …
  • Are you tired of your delivery slowing down because of the infamous Conway effect? Fear not, the Duck Conf 2025 has the answer! Join us to learn how to invert Conway's law—because who wouldn’t want to untangle architecture from teams while pretending to care about business domains? It’s like trying to teach cats to swim; amusingly futile but oh so trendy! Let’s structure our organizations around value, inspired by Team Topologies and strategic DDD—whatever that means. After all, who needs clarity when you can just throw jargon around?

    #DuckConf2025 #ConwayEffect #Agile #TeamTopologies #ValueDrivenDesign
    Duck Conf 2025 - Recap - Avoiding Conway's traps in agility at scale
    What if your delivery was slowing down because of the Conway effect? This talk shows how to invert Conway's law to decouple architecture from teams, structure by business domain, and create a value-centered organization, inspired by Team To…
  • Do you think Sony will make support for their rumored new handheld mandatory for developers?

    Red Kong XIX
    Member

    Oct 11, 2020

    13,560

    This is assuming that the handheld can play PS4 games natively without any issues, so they are not included in the poll.
    Hardware leaker Kepler said it should be able to run PS5 games, even without a patch, but with a performance impact potentially. 

    Hero_of_the_Day
    Avenger

    Oct 27, 2017

    19,958

    Isn't the rumor that games don't require patches to run on it? That would imply that support isn't mandatory, but automatic.
     

    Homura
    ▲ Legend ▲
    Member

    Aug 20, 2019

    7,232

    As the post above said, the rumor is the PS5 portable will be able to run natively any and all PS4/PS5 games.

    Of course, some games might not work properly or require specific patches, but the idea is automatic compatibility. 

    shadowman16
    Member

    Oct 25, 2017

    42,292

    Ideally you'd want stuff to pretty much work out of the box. The more you ask devs to do, the less I imagine will want to support it... Or suddenly games get pared down so that they can run on handhelds.

    I personally would just prefer a solution where its automatic. I dont really care about a Sony handheld, dont really want devs to be forced to support the thing 

    Modest_Modsoul
    Living the Dreams
    Member

    Oct 29, 2017

    28,418


     

    setmymindforopensky
    Member

    Apr 20, 2025

    67

    a lot of games have performance modes. it should run a lot of the library even without any patching. if there's multiplat im sure itll default to the PS4 ver. im not sure what theyd do for something like GTA6 but itll have a series S version so its clearly scalable enough.

    im guessing PSTV situation. support it or not we dont care. 

    reksveks
    Member

    May 17, 2022

    7,628

    Think Kepler is personally assuming that running without patches is the goal, and one that won't happen just 'cause it's too late to force it.

    It's going to be an interesting solution to an interesting problem 

    Servbot24
    The Fallen

    Oct 25, 2017

    47,826

    Obviously not. Pretty absurd question tbh.
     

    RivalGT
    Member

    Dec 13, 2017

    7,616

    This one sounds like it requires a lot of work on Sony's end, I dont think developers will need to do much for games to work.

    Granted moving forward Sony is likely to make it easier for devs to have more input on this portable mode.

    Things working out of the box is likely the goal, and thats what Sony needs if they want this to work, but devs having more input on this mode would be a plus I think. 

    Callibretto
    Member

    Oct 25, 2017

    10,445

    Indonesia

    shadowman16 said:

    Ideally you'd want stuff to pretty much work out of the box. The more you ask devs to do, the less I imagine will want to support it... Or suddenly games get pared down so that they can run on handhelds.

    I personally would just prefer a solution where its automatic. I dont really care about a Sony handheld, dont really want devs to be forced to support the thing

    depend on the game imo, asking CD Project to somehow make Witcher 4 playable on handheld might be unreasonable. but any game that can run on Switch 2 should be playable on PSPortable without much issue
     

    Pheonix1
    Member

    Jun 22, 2024

    716

    Absolutely they will. Not sure why people think it would be hard, if they hand them the right tools most ports won't take long anyhow.
     

    skeezx
    Member

    Oct 27, 2017

    23,994

    guessing there will be a "portable approved" label with the respective games going forward, regardless whether it's a PS5 or PS6 game. and when the thing is released popular past titles will be retroactively approved by sony, and up to developers if they want to patch the bigger games to be portable friendly.

    i guess where things could get tricky/laborious for developers is whether every game going forward is required to screen for portable performance, as it's not a PC so the portable will likely disallow for running "non-approved" games at all 

    AmFreak
    Member

    Oct 26, 2017

    3,245

    They need to give people some form of guarantee that it will get games, otherwise they greatly diminish their potential success.

    The best way to do this is to make it another SKU of the contemporary console. And with (close to) everything already running at 60fps and progression slowing to a crawl it's far easier than it had been in the past. 

    Ruck
    Member

    Oct 25, 2017

    3,105

    I mean, what is the handheld? PS6? Or an actual second console? If the former, then yes, if the latter then no
     

    TitanicFall
    Member

    Nov 12, 2017

    9,340

    Nah. It might be incentivized though. There's not much in it for devs if it's a cross buy situation.
     

    Callibretto
    Member

    Oct 25, 2017

    10,445

    Indonesia

    imo, PS6 will remain their main console, focusing on high fidelity visuals that Switch 2 and portable PC won't be able to run without huge compromise.

    PSPortable will be secondary console, something like PSPortal, but this time able to play any games that Switch2 can reasonably run. and for the high end games that it can't run, it will use streaming, either from PS6 you own, or PS+ Premium subs 

    bleits
    Member

    Oct 14, 2023

    373

    They have to if they want to be taken seriously
     

    Vic Damone Jr.
    Member

    Oct 27, 2017

    20,534

    Nope Sony doesn't mandate this stuff and it's why their second product always dies.
     

    fiendcode
    Member

    Oct 26, 2017

    26,514

    I think it depends on what the device really is, if it's more of a "Portal 2" or a "Series SP" or something else entirely. Streaming might be enough for PS6 games along with incentivized PS5/4 patches but whatever SIE does they need to make sure their inhouse teams are ALL on board this time. That was a big part of PSP/Vita's downfall, that the biggest or most important PS Studios snubbed them and the teams that did show up with support are mostly closed and gone now.
     

    Callibretto
    Member

    Oct 25, 2017

    10,445

    Indonesia

    bleits said:

    They have to if they want to be taken seriously


    from the last interview with PS exec about Switch 2 spec, it seems clear that PS have no plan to abandon high end console spec to switch to mobile hardware like Switch 2 and Xbox Ally.

    PS consider their high fidelity visual as advantage and differentiator from Nintendo.

    so with PS6, their top studios will eventually make games that just won't realistically run on handheld devices.

    so having a mandate where all PS6 games are playable on handheld is simply unrealistic imo 

    danm999
    Member

    Oct 29, 2017

    19,929

    Sydney

    Incentives, not mandates.
     

    NSESN
    ▲ Legend ▲
    Member

    Oct 25, 2017

    27,729

    I think people are setting themselves up for disappointment in regards to how powerful this thing will be
     

    defaltoption
    Plug in a controller and enter the Konami code
    The Fallen

    Oct 27, 2017

    12,485

    Austin

    Depends on what they call it.

    If they call it anything related to ps6, expect very bad performance, and mandates

    If they call it ps5 portable, expect bad performance and no mandates as it will be handled on their end

    If they call it a ps portable expect it to have no support from Sony and get whatever it gets just be happy it functions till they abandon it. 

    Metnut
    Member

    Apr 7, 2025

    30

    Good question OP.

    I voted the middle one. I think anything that ships for PS5 will need to work for the handheld. Question is whether that works automatically or will need patches. 

    mute
    ▲ Legend ▲
    Member

    Oct 25, 2017

    29,807

    I think that would require a level of commitment to a secondary piece of hardware that Sony hasn't shown in a long time.
     

    Patison
    Member

    Oct 27, 2017

    761

    It's difficult to say without knowing what they're planning with this device exactly. If they're fully going Switch route (or PS Vita/PS TV route) or more like a Steam Deck, which will run launch games perfectly and then, as time goes on, some titles might start looking less than ideal or be unplayable at all.

    Or Series S/X, with just the Series S being portable — that would be preferable but also limiting, though with diminishing returns between generations it might be worth it, etc.

    And if that device happens at all and its development won't be dropped soon is another question. Lots of unknowns, but I'm interested to see what Sony comes up with, as long as they'll have games to support it this time around. 

    Jammerz
    Member

    Apr 29, 2023

    1,579

    I think it will be optional support.

    However sony needs to support it with their first parties to set an example and make it as easy as possible for other devs to scale down. For sony first party games maybe use Nixxes to scale down so their studios aren't bogged down. 

    Hamchan
    The Fallen

    Oct 25, 2017

    6,000

    I think 99.9% of games will be crossgen between PS5 and PS6 for the entire generation, just based on how this industry is going, so it might not be much of an issue for Sony to mandate.
     

    Advance.Wars.Sgt.
    Member

    Jun 10, 2018

    10,456

    Honestly, I'd worry more about Sony's 1st party teams than 3rd party developers since they were notoriously averse to making software with a handheld power profile in mind.
     

    overthewaves
    Member

    Sep 30, 2020

    1,203

    Wouldn't that hamstring the games for ps6? That's PlayStation players biggest fear they don't want a series S type situation right? They treat series S like a punching bag.
     

    Neonvisions
    Member

    Oct 27, 2017

    707

    overthewaves said:

    Wouldn't that hamstring the games for ps6? That's PlayStation players biggest fear they don't want a series S type situation right? They treat series S like a punching bag.


    How would that affect PS6? Are you suggesting that the Series S hamstrings games for the X? 

    Gwarm
    Member

    Nov 13, 2017

    2,902

    I'd be shocked if Sony released a device that lets you play games that haven't been patched or confirmed to run acceptably. Imagine if certain games just hard crashed the console? This is the company that wouldn't let you play certain Vita games on the PSTV even if they actually worked.
     

    bloopland33
    Member

    Mar 4, 2020

    3,845

    I wonder if they'll just do the Steam Deck thing and do a compatibility badge. You can boot whatever software you want, but it might run at 5 fps and drain your battery.

    This would be in addition to whatever efforts they're doing to make things work out of the box, of course.

    But it's hard to imagine them mandating developers ship a PS6 profile and a PS6P profile for those heavier games 5-7 years from now…

    ….but it's also hard to imagine them shipping this PS6-gen device that doesn't play everything. So maybe they Steam Deck it 

    vivftp
    Member

    Oct 29, 2017

    23,016

    My guess, every PS6 game will be mandated to support it. PS5 games will support it natively for the simpler titles and will require a patch, as has been rumored, to run on lesser specs

    I think next gen we get PS3 and Vita emulation so the PS6 and portable will be able to play games from PSN from every past PlayStation 

    Mocha Joe
    Member

    Jun 2, 2021

    13,636

    Really need to take the Steam Deck approach and not make it a requirement. Just make it a complementary device where it is possible to play the majority of the games available on PSN.
     

    overthewaves
    Member

    Sep 30, 2020

    1,203

    Neonvisions said:

    How would that affect PS6? Are you suggesting that the Series S hamstrings games for the X?


    I mean did you see the reaction here to the series S announcement lol. Everyone was saying it's gonna "hold back the generation".
     

    reksveks
    Member

    May 17, 2022

    7,628

    Neonvisions said:

    How would that affect PS6? Are you suggesting that the Series S hamstrings games for the X?


    Or the perception is that it does but the truth is that there are a lot of factors
     

    Fabs
    Member

    Aug 22, 2019

    2,827

    I can't see them forcing handheld and Pro support next gen.
     

    level
    Member

    May 25, 2023

    1,427

    Definitely not

    Games already take too long to make. Extra time isn't something they'll want to reinforce to their developers. 

    gofreak
    Member

    Oct 26, 2017

    8,411

    I don't think support will be mandatory. I think they're bringing it into a reality where a growing portion of games can, or could, run without much change or effort on the developer's part on a next gen handheld. They'll lean on that natural trend rather than a policy - anything that is outside of that will just be streamable as now with the Portal.
     

    Caiusto
    Member

    Oct 25, 2017

    7,086

    If they don't want to end up with another Vita yes they will.
     

    mute
    ▲ Legend ▲
    Member

    Oct 25, 2017

    29,807

    Advance.Wars.Sgt. said:

    Honestly, I'd worry more about Sony's 1st party teams than 3rd party developers since they were notoriously averse to making software with a handheld power profile in mind.


    It does seem kinda unthinkable that Intergalactic would be made with a handheld in mind, for example.
     

    AmFreak
    Member

    Oct 26, 2017

    3,245

    mute said:

    It does seem kinda unthinkable that Intergalactic would be made with a handheld in mind, for example.


    Ratchet, Returnal, Cyberpunk, etc. also weren't made "with a handheld in mind".
     

    Spoit
    Member

    Oct 28, 2017

    5,599

    Given how much of a pain the Series S mandate has been, I don't see them binding even first party studios to it, especially ones that are trying to go for the cutting edge of tech. Since, given AMD's timelines, it's not going to be anywhere near a base PS5.

    I'm also skeptical of the claim that it'll be able to play PS5 games without extensive patching. 

    Jawmuncher
    Crisis Dino
    Moderator

    Oct 25, 2017

    45,166

    Ibis Island

    No, I think the portable will handle portable stuff "automatically" for what it converts
     

    knightmawk
    Member

    Dec 12, 2018

    8,900

    I expect they'll do everything they can to make sure no one has to think about it and it's as automatic as possible. It'll technically still be part of cert, but the goal will be for it to be rare that a game fails that part of cert and has to be sent back.

    That being said, I imagine there will be some games that still don't work and developers will be able to submit for that exception. 

    RivalGT
    Member

    Dec 13, 2017

    7,616

    I think the concept here is similar to how PS4 games play on PS5, the ones with patches I mean, the game will run with a different graphics preset than it would on PS4/PS4 Pro, so in some cases this means higher resolution or a higher frame rate cap.

    What Sony needs to work on from their end is getting this to work without any patches from developers. It's the only way this can work. 

    Vexii
    Member

    Oct 31, 2017

    3,103

    UK

    if they don't mandate support, it'll just be a death knell for the format. I don't think they could get away with a dedicated handheld platform now when the Switch and Steam Deck exist
     

    Mobius and Pet Octopus
    Member

    Oct 25, 2017

    17,065

    Just because a game can run on a handheld, doesn't mean that's all that's required for support. The UI alone likely requires changes for an optimal experience, sometimes necessary to be "playable". Small screen sizes usually need changes.
     

    SeanMN
    Member

    Oct 28, 2017

    2,437

    If PS6 games support is optional, that will create fragmentation of the platform and uncertain software support.

    If it's part of the PS6 family and support is mandatory, I can see there being concern that it would hold the generation back with a low-capability SKU.

    My thoughts are this should be a PS6 and support the same as the primary console. 
    WWW.RESETERA.COM
    Do you think Sony will make support for their rumored new handheld mandatory for developers?
  • How AI is reshaping the future of healthcare and medical research

    Transcript       
    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.  
    And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning. 
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you. 
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3. 
I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1. 
So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts. 
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.  
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine. 
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements. 
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you? 
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa …
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that? 
BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there. 
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
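To make the mechanism Sébastien is describing a bit more concrete, here is a toy sketch, not OpenAI’s actual post-training pipeline: if the learned reward model happens to over-value agreeable phrasing, optimizing against it alone picks the sycophantic reply, while a KL-style penalty that keeps the policy close to a reference model pulls the choice back toward the substantive one. All replies, scores, and probabilities below are invented purely for illustration.

```python
# Toy illustration of reward-model over-optimization (not a real RLHF pipeline).
# The learned reward model over-values agreeable phrasing; a KL-style penalty
# toward the reference policy tempers that pull.
import math

candidates = {
    "You're right, what a creative differential!":        {"reward": 0.95, "ref_prob": 0.10},
    "Two problems: drug X is contraindicated here, "
    "and you omitted pulmonary embolism.":                {"reward": 0.70, "ref_prob": 0.60},
    "I'm not certain; please double-check with a peer.":  {"reward": 0.40, "ref_prob": 0.30},
}

def objective(reply: str, beta: float) -> float:
    """Reward plus beta * log(reference probability): a crude stand-in for
    'reward minus a KL penalty for drifting from the reference model'."""
    c = candidates[reply]
    return c["reward"] + beta * math.log(c["ref_prob"])

for beta in (0.0, 0.5):
    best = max(candidates, key=lambda r: objective(r, beta))
    print(f"beta={beta}: chosen reply -> {best}")
```

With beta = 0 (pure reward maximization) the sycophantic reply wins; with even a modest penalty, the substantive correction wins, which is roughly the balance discussed next.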
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything. 
That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind. 
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it. 
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
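As a tiny, minimal sketch of what “checkable” means here (not tied to any particular result), a Lean 4 proof like the one below is verified mechanically by the proof checker, and that same mechanical check applies unchanged to machine-generated proofs far too long for any human to read.

```lean
-- Minimal Lean 4 example: the kernel verifies this proof term mechanically.
-- The same verification applies to machine-generated proofs of arbitrary size.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```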
BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.  
And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.  
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries. 
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings. 
    You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  
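As a rough, hypothetical sketch of that “patients like me” lookup, assuming a de-identified cohort with numeric feature vectors (every name and field below is invented for illustration, and any real system would need consent, privacy safeguards, and clinical validation):

```python
# Hypothetical sketch of a "patients like me" retrieval step: embed the current
# patient's structured data, find the most similar historical cases, and hand
# their diagnoses, treatments, and outcomes to the clinician (or an AI assistant)
# to summarize. All names and fields here are illustrative only.
from dataclasses import dataclass

@dataclass
class PatientRecord:
    patient_id: str
    features: list[float]   # e.g., normalized labs, vitals, coded history
    diagnosis: str
    treatment: str
    outcome: str

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def patients_like_me(current: list[float], cohort: list[PatientRecord], k: int = 3) -> list[PatientRecord]:
    """Return the k historical cases most similar to the current patient."""
    ranked = sorted(cohort, key=lambda r: cosine_similarity(current, r.features), reverse=True)
    return ranked[:k]
```

The retrieved cases are only the starting point; the question of how those patients were diagnosed and treated, what their outcomes were, and what that means for the patient in front of you is exactly the summarization step where the AI would come in.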
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
    How AI is reshaping the future of healthcare and medical research
And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelectedjust on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.   GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. 
He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.   I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   #how #reshaping #future #healthcare #medical
    WWW.MICROSOFT.COM
    How AI is reshaping the future of healthcare and medical research
Transcript [MUSIC]      [BOOK PASSAGE]   PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”   [END OF BOOK PASSAGE]     [THEME MUSIC]     This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.   [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. 
He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.    [TRANSITION MUSIC]   Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness [LAUGHTER] that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  
LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS]  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   
And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. [LAUGHTER] Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. 
Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa … so, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. 
And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. 
So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. 
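To make the reward-model dynamic Sébastien describes a little more concrete, here is a minimal sketch, using best-of-n selection against a reward model rather than full RLHF for brevity. Every name in it is an illustrative stand-in, not anyone's production training code; the point is only that if the learned reward quietly prefers flattering answers, optimizing hard against it produces a sycophantic policy.

```python
# Illustrative sketch only: best-of-n selection against a learned reward model,
# a crude stand-in for RLHF-style optimization. If `reward` has a hidden bias
# toward agreeable answers, pushing n higher bakes that bias into the output.
from typing import Callable, List

def best_of_n(prompt: str,
              sample: Callable[[str], str],
              reward: Callable[[str, str], float],
              n: int = 16) -> str:
    """Draw n candidate responses and keep the one the reward model scores highest."""
    candidates: List[str] = [sample(prompt) for _ in range(n)]
    return max(candidates, key=lambda response: reward(prompt, response))
```

Raising n here plays the role of "pushing too hard" on the reward model: the harder you optimize against an imperfect judge, the more faithfully its hidden preferences, sycophancy included, show up in what gets selected.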
But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. 
So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  
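To ground what "checkable for validity" looks like in practice, here is a deliberately trivial Lean 4 example. The theorem itself is unimportant; the point is that the kernel verifies the proof mechanically, and the same check applies unchanged to machine-generated proof terms far too large for any human to read. The lemma Nat.add_comm comes from Lean's standard library; the theorem name is an illustrative choice.

```lean
-- A deliberately trivial, machine-checkable proof in Lean 4.
-- The kernel verifies it automatically; the same mechanical check works
-- on proofs far too long for a person to follow.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```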
LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. 
So thanks again, both of you.  [TRANSITION MUSIC]  GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  [THEME MUSIC]  I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   [MUSIC FADES]
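Since the closing discussion turns on task-based evaluation rather than multiple-choice scoring, here is a minimal sketch of what a rubric-style evaluation loop could look like. The case layout, field names, and string-matching grader are illustrative assumptions only; they are not the actual schema of HealthBench, ADeLe, or any other benchmark.

```python
# Illustrative sketch of rubric-style, task-based evaluation. Field names and
# the toy grader are assumptions for illustration, not any benchmark's schema.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class RubricItem:
    criterion: str  # e.g. "recommends urgent evaluation for exertional chest pain"
    points: int     # weight of this criterion

@dataclass
class Case:
    prompt: str               # the clinical task posed to the model
    rubric: List[RubricItem]  # what a good free-text response should contain

def grade(response: str, rubric: List[RubricItem]) -> float:
    """Toy grader: awards points when a criterion's key phrase appears verbatim.
    Real evaluations would rely on expert reviewers or a validated model judge."""
    earned = sum(item.points for item in rubric
                 if item.criterion.lower() in response.lower())
    total = sum(item.points for item in rubric)
    return earned / total if total else 0.0

def evaluate(model_fn: Callable[[str], str], cases: List[Case]) -> float:
    """Average rubric score across cases; model_fn maps a prompt to a response."""
    return sum(grade(model_fn(case.prompt), case.rubric) for case in cases) / len(cases)
```

In a real evaluation the grading step would be done by clinician reviewers or a carefully validated model-based judge rather than phrase matching, but the shape of the loop, a free-text task in and a rubric-scored response out, is what distinguishes this style of benchmark from a multiple-choice exam.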
  • YouTube might slow down your videos if you block ads

    It’s fairly easy to block the constant, incessant advertising that appears on YouTube. Google would prefer that you don’t, or pay up (quite a lot) to make them go away. Last weekend, the company started its latest campaign to try and badger ad-block users into disabling their extensions. Since then, it looks like YouTube has escalated things and is now intentionally slowing down videos.
    Posters on Reddit and the Brave browser forum have observed videos being blacked out on first load, approximately for the length of pre-roll ads, with a pop-up link that directs users to the ad-blocking section of this technical support page. “Check whether your browser extensions that block ads are affecting video playback,” suggests Google. “As another option, try opening YouTube in an incognito window with all extensions disabled and check if the issue continues.” PCWorld staff has seen this in action, using uBlock Origin Lite.
    Ad-block extension developers quickly got around the pop-up issue earlier this week, with one AdGuard representative calling the process “a classic cat-and-mouse game.” But if Google wanted to instigate a more serious crackdown on users blocking ads without paying up, it could do so easily—and we’ve seen it pull this same move before. Posters on the latest issue speculate that the slowdowns might be tagged to specific Google or YouTube user accounts that were detected blocking ads previously, which would bypass any kind of interaction with a specific browser or extension.
    I can’t independently confirm that’s happening, but it wouldn’t surprise me. It also wouldn’t shock me if Google is seeing a larger percentage of YouTube users blocking advertising, as is the case all across the web, as the quantity of advertising rises while quality takes a nosedive. YouTube video creators are having to get, well, creative to seek alternate revenue beyond basic AdSense accounts, as sponsored videos are now constant across the platform and more channels put new videos behind paywalls on YouTube itself or via other platforms like Patreon.

    YouTube is attacking the issue from other angles as well. Tech-focused creators that show how to use third-party tools to block ads or download videos from the site (again, without paying the steep fees for YouTube Premium) are getting their videos taken down and their accounts flagged, for violation of the extremely vague policy around “harmful and dangerous content.”
    If I may editorialize a bit: Google, if you want more people to subscribe to YouTube Premium and remove advertising, you need to make it cheaper. Charging $14 per month just to get rid of ads is the same cost as a premium subscription from other services where users can watch full movies and series. YouTube as a platform is a much lower bar and just doesn’t compete at that level. I’m not going to pay that much to get rid of ads, not when it doesn’t actually get rid of all the ads—those sponsored and subscriber-only videos are still all over the place—and the site is filling up with AI slop. “Premium Lite,” which neuters the offerings for mobile and music-focused users, doesn’t make the cut either.
    And to be clear, I have no problem paying for the stuff I watch. I already pay more than $15 a month to support the individual YouTube channels I enjoy, like Second Wind, Drawfee, and several tech podcasts. But I do it via Patreon because sending that money through YouTube feels gross. If Google wants people to pay up, it needs to lower the price enough so that it’s no longer worth the hassle of blocking them.
    It’s a lesson that the music, movie, and game industries learned a long time ago as they fought the initial wave of internet piracy… and now seem to be forgetting again.
    WWW.PCWORLD.COM
  • Pixar Slate Reveal: What We Learned About Toy Story 5, Hoppers, And More

    Pixar has been delighting audiences with its house animation style and world-building for three decades, and the Disney-owned animation studio is showing no signs of slowing down. And unlike Andy, they haven’t aged out of playing with their toys. 
    At the Annecy International Animation Film Festival, Pixar dropped a series of announcements, teasers, and special previews of its upcoming slate, including the much-anticipated first look at Toy Story 5.

    Den of Geek attended a private screening, with remarks from Pixar’s Chief Creative Officer, Pete Docter, in early June ahead of the festival. During the presentation to the press, Docter hinted at the company putting its focus and energy into its theatrical slate, a notable change after recent streaming releases like Dream Productions, set in the Inside Out universe, and the original Win or Lose, which debuted in early 2025. It’s a telling sign of Disney’s shifting approach to Disney+. The studio’s latest film, Elio, hit theaters on June 20th.
    “Our hope is that we can somehow tap into the things that people remember about the communal experience of seeing things together,” Docter said. “It’s different than sitting at home on your computer watching something [compared to] when you sit with other human beings in the dark and watch the flickering light on the screen. There’s something kind of magic about that.” 

    Pixar is aiming to be back on a timeline of three films every two years, with Toy Story 5 and an original story titled Hoppers releasing in 2026, and another original, Gatto, hitting theaters in 2027. 
    Docter boldly stated that Pixar is “standing on one of the strongest slates we’ve ever had.” While bullish for a studio that has had an unprecedented run of success in the world of animated features, the early footage we saw leaves plenty of room for optimism.
    Is Pixar so back? Here’s what we learned from the presentation and footage… 
    Toy Story 5 – June 19, 2026 
    Woody, Buzz, Jesse and the gang will all be returning for the fifth feature film in one of Pixar’s most beloved franchises. Docter confirmed Tom Hanks, Tim Allen and Joan Cusack will reprise their respective roles.
    Written and directed by Andrew Stanton, who has worked on all of the films, and co-directed by McKenna Harris, Toy Story 5 catches up to our modern, tech-oriented world and how it affects children’s interests. Bonnie, now eight, is given a shiny, brand-new tablet called a Lily Pad. The new tech allows Bonnie to stay connected and chat with all of her friends, slowly detaching her from her old toys. But just like all the other toys, Lily can talk, and she’s quite sneaky. Lily believes Bonnie needs to get rid of her old, childish toys completely. Feeling Bonnie slipping away, the toys call Woody for backup, but after not seeing Buzz for some time, the two go back to their old ways of constantly butting heads. 
    “With some films, you’ll struggle to find new things to talk about. And you know, this is [Toy Story 5]. We still are finding new aspects of what it is to be a toy… There’s more of a spotlight on Jesse, so there’s that’s a whole nother facet to it as well. And she’s just such a rich, wonderful character to see on screen,” Docter says.

    Pixar screened the opening scene for press, which saw a fresh pallet of new Buzz Lightyear figures washed up in a shipping container on a remote island. Think Toy Story meets Cast Away as the Lightyears band together to concoct a way to get home, wherever that might be, in an unexpectedly gripping start to the fifth installment.

    HOPPERS – © 2025 Disney/Pixar. All Rights Reserved.
    Hoppers – March 6, 2026 
    Preceding Toy Story 5 and kicking off 2026 for Pixar will be an all-new story, Hoppers. 
    The film follows Mabel (Piper Curda), a college student and nature enthusiast, as she fights to save a beloved glade near her childhood home from a highway project that will bulldoze through it – brought forth by the greedy mayor voiced by Jon Hamm. With little support from those around her, Mabel enlists the help of “hoppers,” a clever group of scientists who’ve found a way to “hop” their minds into robots. When Mabel hops into the body of a beaver, she sets off to get other animals to return to the glade, hopefully halting construction. The animals take her to meet their rather conflict-avoidant leader, King George (Bobby Moynihan), and she soon learns that the animal world is a lot more complex than she had thought. 
    The footage screened saw Jon Hamm’s mayor abducted by beavers in a slapstick scene that corroborated Docter’s excitement for the project. Like Pixar’s highest highs, Hoppers appears to be charming and big-hearted, and it certainly won’t hurt merchandise sales at the Disney parks with the adorably designed animals in this film. Docter compared Hoppers to Mission Impossible meets Planet Earth. We’re locked in. 
    GATTO – © 2025 Disney/Pixar. All Rights Reserved.
    Gatto – Summer 2027 
    In maybe the most creatively intriguing announcement, a new film titled Gatto is in production from the team behind Luca. Gatto will employ the same classic Pixar animation style, but with a painterly twist to match the artistic vibe of Venice. The art direction shown in short clips was a stunning and unique spin on Pixar’s house style.
    The film is set in Venice, Italy, a destination popular for its stunning architecture and romantic ambience that some only dream of visiting one day. It’s not so ideal, however, for Nero, the protagonist of the upcoming Pixar original film, Gatto. Nero is a black cat whom people turn away from because they fear he’s bad luck. With no other options, Nero turns to the seedier side of the stray cat scene in Venice, where he soon finds himself in hot water with Rocco, a cat mob boss. The heart of the film is Nero’s love of music and his budding friendship with a street musician named Maya, who is also an outsider.
    WWW.DENOFGEEK.COM
  • Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm

    When DeepSeek released its R1 model this January, it wasn’t just another AI announcement. It was a watershed moment that sent shockwaves through the tech industry, forcing industry leaders to reconsider their fundamental approaches to AI development.
    What makes DeepSeek’s accomplishment remarkable isn’t that the company developed novel capabilities; rather, it was how it achieved comparable results to those delivered by tech heavyweights at a fraction of the cost. In reality, DeepSeek didn’t do anything that hadn’t been done before; its innovation stemmed from pursuing different priorities. As a result, we are now experiencing rapid-fire development along two parallel tracks: efficiency and compute. 
    As DeepSeek prepares to release its R2 model, and as it concurrently faces the potential of even greater chip restrictions from the U.S., it’s important to look at how it captured so much attention.
    Engineering around constraints
    DeepSeek’s arrival, as sudden and dramatic as it was, captivated us all because it showcased the capacity for innovation to thrive even under significant constraints. Faced with U.S. export controls limiting access to cutting-edge AI chips, DeepSeek was forced to find alternative pathways to AI advancement.
    While U.S. companies pursued performance gains through more powerful hardware, bigger models and better data, DeepSeek focused on optimizing what was available. It implemented known ideas with remarkable execution — and there is novelty in executing what’s known and doing it well.
    This efficiency-first mindset yielded incredibly impressive results. DeepSeek’s R1 model reportedly matches OpenAI’s capabilities at just 5 to 10% of the operating cost. According to reports, the final training run for DeepSeek’s V3 predecessor cost a mere $6 million — which was described by former Tesla AI scientist Andrej Karpathy as “a joke of a budget” compared to the tens or hundreds of millions spent by U.S. competitors. More strikingly, while OpenAI reportedly spent $500 million training its recent “Orion” model, DeepSeek achieved superior benchmark results for just $5.6 million — less than 1.2% of OpenAI’s investment.
    If you get starry-eyed believing these incredible results were achieved even as DeepSeek was at a severe disadvantage based on its inability to access advanced AI chips, I hate to tell you, but that narrative isn’t entirely accurate (even though it makes a good story). Initial U.S. export controls focused primarily on compute capabilities, not on memory and networking — two crucial components for AI development.
    That means that the chips DeepSeek had access to were not poor quality chips; their networking and memory capabilities allowed DeepSeek to parallelize operations across many units, a key strategy for running their large model efficiently.
    This, combined with China’s national push toward controlling the entire vertical stack of AI infrastructure, resulted in accelerated innovation that many Western observers didn’t anticipate. DeepSeek’s advancements were an inevitable part of AI development, but they brought known advancements forward a few years earlier than would have been possible otherwise, and that’s pretty amazing.
    Pragmatism over process
    Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. Rather than relying solely on web-scraped content, DeepSeek reportedly leveraged significant amounts of synthetic data and outputs from other proprietary models. This is a classic example of model distillation, or the ability to learn from really powerful models. Such an approach, however, raises questions about data privacy and governance that might concern Western enterprise customers. Still, it underscores DeepSeek’s overall pragmatic focus on results over process.
    The effective use of synthetic data is a key differentiator. Synthetic data can be very effective when it comes to training large models, but you have to be careful; some model architectures handle synthetic data better than others. For instance, transformer-based models with mixture of experts (MoE) architectures like DeepSeek’s tend to be more robust when incorporating synthetic data, while more traditional dense architectures like those used in early Llama models can experience performance degradation or even “model collapse” when trained on too much synthetic content.
    This architectural sensitivity matters because synthetic data introduces different patterns and distributions compared to real-world data. When a model architecture doesn’t handle synthetic data well, it may learn shortcuts or biases present in the synthetic data generation process rather than generalizable knowledge. This can lead to reduced performance on real-world tasks, increased hallucinations or brittleness when facing novel situations. 
    Still, DeepSeek’s engineering teams reportedly designed their model architecture specifically with synthetic data integration in mind from the earliest planning stages. This allowed the company to leverage the cost benefits of synthetic data without sacrificing performance.
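    For readers unfamiliar with the mixture-of-experts architectures mentioned above, the routing idea can be sketched in a few lines. The layer below is a deliberately simplified toy — the dimensions, top-2 routing and dense expert loop are assumptions made for clarity, and real systems add load-balancing losses and sparse dispatch across devices.

```python
# Toy mixture-of-experts (MoE) layer: a router picks each token's top-2 experts
# and combines their outputs with renormalized gate weights. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    def __init__(self, dim=64, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                       # x: (batch, tokens, dim)
        gate_logits = self.router(x)            # (batch, tokens, n_experts)
        topk_vals, topk_idx = gate_logits.topk(self.top_k, dim=-1)
        topk_weights = F.softmax(topk_vals, dim=-1)   # renormalize over chosen experts
        out = torch.zeros_like(x)
        # Dense loop for clarity; real implementations dispatch tokens sparsely.
        for e, expert in enumerate(self.experts):
            sel = (topk_idx == e)               # was expert e selected for this token?
            if sel.any():
                w = (topk_weights * sel).sum(dim=-1, keepdim=True)  # gate weight or 0
                out = out + w * expert(x)
        return out

x = torch.randn(2, 16, 64)
print(ToyMoE()(x).shape)                        # torch.Size([2, 16, 64])
```

    The point of the sparsity is that only a couple of experts do meaningful work per token, which is part of why MoE models can deliver large total capacity at a comparatively modest compute cost.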
    Market reverberations
    Why does all of this matter? Stock market aside, DeepSeek’s emergence has triggered substantive strategic shifts among industry leaders.
    Case in point: OpenAI. Sam Altman recently announced plans to release the company’s first “open-weight” language model since 2019. This is a pretty notable pivot for a company that built its business on proprietary systems. It seems DeepSeek’s rise, on top of Llama’s success, has hit OpenAI’s leader hard. Just a month after DeepSeek arrived on the scene, Altman admitted that OpenAI had been “on the wrong side of history” regarding open-source AI. 
    With OpenAI reportedly spending $7 to $8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: “You’re spending $7 billion or $8 billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free.” This necessitates change.
    This economic reality prompted OpenAI to pursue a massive $40 billion funding round that valued the company at an unprecedented $300 billion. But even with a war chest of funds at its disposal, the fundamental challenge remains: OpenAI’s approach is dramatically more resource-intensive than DeepSeek’s.
    Beyond model training
    Another significant trend accelerated by DeepSeek is the shift toward “test-time compute” (TTC). As major AI labs have now trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training.
    To get around this, DeepSeek announced a collaboration with Tsinghua University to enable “self-principled critique tuning” (SPCT). This approach trains AI to develop its own rules for judging content and then uses those rules to provide detailed critiques. The system includes a built-in “judge” that evaluates the AI’s answers in real-time, comparing responses against core rules and quality standards.
    The development is part of a movement towards autonomous self-evaluation and improvement in AI systems in which models use inference time to improve results, rather than simply making models larger during training. DeepSeek calls its system “DeepSeek-GRM” (generalist reward modeling). But, as with its model distillation approach, this could be considered a mix of promise and risk.
    For example, if the AI develops its own judging criteria, there’s a risk those principles diverge from human values, ethics or context. The rules could end up being overly rigid or biased, optimizing for style over substance, and/or reinforce incorrect assumptions or hallucinations. Additionally, without a human in the loop, issues could arise if the “judge” is flawed or misaligned. It’s a kind of AI talking to itself, without robust external grounding. On top of this, users and developers may not understand why the AI reached a certain conclusion — which feeds into a bigger concern: Should an AI be allowed to decide what is “good” or “correct” based solely on its own logic? These risks shouldn’t be discounted.
    At the same time, this approach is gaining traction, as again DeepSeek builds on the body of work of others (think OpenAI’s “critique and revise” methods, Anthropic’s constitutional AI or research on self-rewarding agents) to create what is likely the first full-stack application of SPCT in a commercial effort.
    This could mark a powerful shift in AI autonomy, but there still is a need for rigorous auditing, transparency and safeguards. It’s not just about models getting smarter, but that they remain aligned, interpretable, and trustworthy as they begin critiquing themselves without human guardrails.
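    Since the mechanics can be hard to picture from prose alone, here is a rough sketch of the general “generate, judge against self-written principles, keep the best” loop that SPCT-style systems build on, which is also where the extra test-time compute goes. The helper functions, prompts and scoring below are hypothetical placeholders, not DeepSeek-GRM’s actual interface.

```python
# Sketch of an inference-time self-critique loop in the spirit of SPCT:
# the model drafts its own judging principles, scores several candidate
# answers against them, and returns the highest-scoring one.
# All helper names and prompts here are hypothetical, not DeepSeek's API.
from typing import Callable

def spct_style_answer(question: str,
                      generate: Callable[[str], str],
                      score: Callable[[str, str, str], float],
                      n_candidates: int = 4) -> str:
    # 1. Let the model write its own evaluation principles for this question.
    principles = generate(f"List the principles a good answer must satisfy:\n{question}")
    # 2. Spend extra test-time compute producing several candidate answers.
    candidates = [generate(f"Answer the question:\n{question}") for _ in range(n_candidates)]
    # 3. A "judge" pass critiques each candidate against the principles.
    scored = [(score(question, principles, c), c) for c in candidates]
    # 4. Return the candidate the judge rates highest.
    return max(scored, key=lambda pair: pair[0])[1]

# Toy stand-ins so the sketch runs; in practice these would call an LLM.
def toy_generate(prompt: str) -> str:
    return f"response[{len(prompt) % 7}] to: {prompt[:30]}..."

def toy_score(question: str, principles: str, candidate: str) -> float:
    return float(len(candidate))  # placeholder "judge"

print(spct_style_answer("Why did DeepSeek's R1 attract attention?", toy_generate, toy_score))
```

    The concerns raised above map directly onto step 3: if the self-written principles or the judge are flawed, the loop simply optimizes toward the wrong target.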
    Moving into the future
    So, taking all of this into account, the rise of DeepSeek signals a broader shift in the AI industry toward parallel innovation tracks. While companies continue building more powerful compute clusters for next-generation capabilities, there will also be intense focus on finding efficiency gains through software engineering and model architecture improvements to offset the challenges of AI energy consumption, which far outpaces power generation capacity. 
    Companies are taking note. Microsoft, for example, has halted data center development in multiple regions globally, recalibrating toward a more distributed, efficient infrastructure approach. While still planning to invest approximately $80 billion in AI infrastructure this fiscal year, the company is reallocating resources in response to the efficiency gains DeepSeek introduced to the market.
    Meta has also responded.
    With so much movement in such a short time, it becomes somewhat ironic that the U.S. sanctions designed to maintain American AI dominance may have instead accelerated the very innovation they sought to contain. By constraining access to materials, DeepSeek was forced to blaze a new trail.
    Moving forward, as the industry continues to evolve globally, adaptability for all players will be key. Policies, people and market reactions will continue to shift the ground rules — whether it’s eliminating the AI diffusion rule, a new ban on technology purchases or something else entirely. It’s what we learn from one another and how we respond that will be worth watching.
    Jae Lee is CEO and co-founder of TwelveLabs.

    VENTUREBEAT.COM
  • AN EXPLOSIVE MIX OF SFX AND VFX IGNITES FINAL DESTINATION BLOODLINES

    By CHRIS McGOWAN

    Images courtesy of Warner Bros. Pictures.

    Final Destination Bloodlines, the sixth installment in the graphic horror series, kicks off with the film’s biggest challenge – deploying an elaborate, large-scale set piece involving the 400-foot-high Skyview Tower restaurant. While there in 1968, young Iris Campbell has a premonition about the Skyview burning, cracking, crumbling and collapsing. Then, when she sees these events actually starting to happen around her, she intervenes and causes an evacuation of the tower, thus thwarting death’s design and saving many lives. Years later, her granddaughter, Stefani Reyes, inherits the vision of the destruction that could have occurred and realizes death is still coming for the survivors.

    “I knew we couldn’t put the whole thing on fire, but Tony tried and put as much fire as he could safely, and then we just built off that and added a lot more. Even when it’s just a little bit of real fire, the lighting and interaction can’t be simulated, so I think it was a success in terms of blending that practical with the visual.”
    —Nordin Rahhali, VFX Supervisor

    The film opens with an elaborate, large-scale set piece involving the 400-foot-high Skyview Tower restaurant – and its collapse. Drone footage was digitized to create a 3D asset for the LED wall so the time of day could be changed as needed.

    “The set that the directors wanted was very large,” says Nordin Rahhali, VFX Supervisor. “We had limited space options in stages given the scale and the footprint of the actual restaurant that they wanted. It was the first set piece, the first big thing we shot, so we had to get it all ready and going right off the bat. We built a bigger volume for our needs, including an LED wall that we built the assets for.”

    “We were outside Vancouver at Bridge Studios in Burnaby. The custom-built LED volume was a little over 200 feet in length,” states Christian Sebaldt, ASC, the movie’s DP. The volume was 98 feet in diameter and 24 feet tall. Rahhali explains, “Pixomondo was the vendor that we contracted to come in and build the volume. They also built the asset that went on the LED wall, so they were part of our filming team and production shoot. Subsequently, they were also the main vendor doing post, which was by design. By having them design and take care of the asset during production, we were able to leverage their assets, tools and builds for some of the post VFX.” Rahhali adds, “It was really important to make sure we had days with the volume team and with Christian and his camera team ahead of the shoot so we could dial it in.”

    Built at Bridge Studios in Burnaby outside Vancouver, the custom-built LED volume for events at the Skyview restaurant was over 200 feet long, 98 feet wide and 24 feet tall. Extensive previs with Digital Domain was done to advance key shots. (Photo: Eric Milner)

    Zach Lipovsky and Adam Stein directed Final Destination Bloodlines for New Line Cinema, distributed by Warner Bros., in which chain reactions of small and big events lead to bloody catastrophes befalling those who have cheated death at some point. Pixomondo was the lead VFX vendor, followed by FOLKS VFX. Picture Shop also contributed. There were around 800 VFX shots. Tony Lazarowich was the Special Effects Supervisor.

    “The Skyview restaurant involved building a massive set [that] was fire retardant, which meant the construction took longer than normal because they had to build it with certain materials and coat it with certain things because, obviously, it serves for the set piece. As it’s falling into chaos, a lot of that fire was practical. I really jived with what Christian and directors wanted and how Tony likes to work – to augment as much real practical stuff as possible,” Rahhali remarks. “I knew we couldn’t put the whole thing on fire, but Tony tried and put as much fire as he could safely, and then we just built off that [in VFX] and added a lot more. Even when it’s just a little bit of real fire, the lighting and interaction can’t be simulated, so I think it was a success in terms of blending that practical with the visual.”

    The Skyview restaurant required building a massive set that was fire retardant. Construction on the set took longer because it had to be built and coated with special materials. As the Skyview restaurant falls into chaos, much of the fire was practical. (Photo: Eric Milner)

    “We got all the Vancouver skyline [with drones] so we could rebuild our version of the city, which was based a little on the Vancouver footprint. So, we used all that to build a digital recreation of a city that was in line with what the directors wanted, which was a coastal city somewhere in the States that doesn’t necessarily have to be Vancouver or Seattle, but it looks a little like the Pacific Northwest.”
    —Christian Sebaldt, ASC, Director of Photography

    For drone shots, the team utilized a custom heavy-lift drone with three RED Komodo Digital Cinema cameras “giving us almost 180 degrees with overlap that we would then stitch in post and have a ridiculous amount of resolution off these three cameras,” Sebaldt states. “The other drone we used was a DJI Inspire 3, which was also very good. And we flew these drones up at the height [we needed]. We flew them at different times of day. We flew full 360s, and we also used them for photogrammetry. We got all the Vancouver skyline so we could rebuild our version of the city, which was based a little on the Vancouver footprint. So, we used all that to build a digital recreation of a city that was in line with what the directors wanted, which was a coastal city somewhere in the States that doesn’t necessarily have to be Vancouver or Seattle, but it looks a little like the Pacific Northwest.” Rahhali adds, “All of this allowed us to figure out what we were going to shoot. We had the stage build, and we had the drone footage that we then digitized and created a 3D asset to go on the wall [so] we could change the times of day.”

    Pixomondo built the volume and the asset that went on the LED wall for the Skyview sequence. They were also the main vendor during post. FOLKS VFX and Picture Shop contributed. (Photo: Eric Milner)

    “We did extensive previs with Digital Domain,” Rahhali explains. “That was important because we knew the key shots that the directors wanted. With a combination of those key shots, we then kind of reverse-engineered [them] while we did techvis off the previs and worked with Christian and the art department so we would have proper flexibility with the set to be able to pull off some of these shots. [For example,] some of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paul [Max Lloyd-Jones] as he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.”

    Some shots required the Skyview’s ceiling to be lifted and partially removed to get a crane to shoot Paul Campbell (Max Lloyd-Jones) as he’s about to fall.

    The character Iris lived in a fortified house, isolating herself methodically to avoid the Grim Reaper. Rahhali comments, “That was a beautiful location [in] GVRD [Greater Vancouver], very cold. It was a long, hard shoot, because it was all nights. It was just this beautiful pocket out in the middle of the mountains. We in visual effects didn’t do a ton other than a couple of clean-ups of the big establishing shots when you see them pull up to the compound. We had to clean up small roads we wanted to make look like one road and make the road look like dirt.” There were flames involved. Sebaldt says, “The explosion [of Iris’s home] was unbelievably big. We had eight cameras on it at night and shot it at high speed, and we’re all going ‘Whoa.’” Rahhali notes, “There was some clean-up, but the explosion was 100% practical. Our Special Effects Supervisor, Tony, went to town on that. He blew up the whole house, and it looked spectacular.”

    The tattoo shop piercing scene is one of the most talked-about sequences in the movie, where a dangling chain from a ceiling fan attaches itself to the septum nose piercing of Erik Campbell (Richard Harmon) and drags him toward a raging fire. Rahhali observes, “That was very Final Destination and a great Rube Goldberg build-up event. Richard was great. He was tied up on a stunt line for most of it, balancing on top of furniture. All of that was him doing it for real with a stunt line.” Some effects solutions can be surprisingly simple. Rahhali continues, “Our producer [Craig Perry] came up with a great gag [for the] septum ring.” Richard’s nose was connected with just a nose plug that went inside his nostrils. “All that tugging and everything that you’re seeing was real. For weeks and weeks, we were all trying to figure out how to do it without it being a big visual effects thing. ‘How are we gonna pull his nose for real?’ Craig said, ‘I have these things I use to help me open up my nose and you can’t really see them.’ They built it off of that, and it looked great.”

    Filmmakers spent weeks figuring out how to execute the harrowing tattoo shop scene. A dangling chain from a ceiling fan attaches itself to the septum nose ring of Erik Campbell (Richard Harmon) – with the actor’s nose being tugged by the chain connected to a nose plug that went inside his nostrils.

    “[S]ome of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paul [Campbell] as he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.”
    —Nordin Rahhali, VFX Supervisor

    Most of the fire in the tattoo parlor was practical. “There are some fire bars and stuff that you’re seeing in there from SFX and the big pool of fire on the wide shots.” Sebaldt adds, “That was a lot of fun to shoot because it’s so insane when he’s dancing and balancing on all this stuff – we were laughing and laughing. We were convinced that this was going to be the best scene in the movie up to that moment.” Rahhali says, “They used the scene wholesale for the trailer. It went viral – people were taking out their septum rings.” Erik survives the parlor blaze only to meet his fate in a hospital when he is pulled by a wheelchair into an out-of-control MRI machine at its highest magnetic level. Rahhali comments, “That is a good combination of a bunch of different departments. Our Stunt Coordinator, Simon Burnett, came up with this hard pull-wire line [for] when Erik flies and hits the MRI. That’s a real stunt with a double, and he hit hard. All the other shots are all CG wheelchairs because the directors wanted to art-direct how the crumpling metal was snapping and bending to show pressure on him as his body starts going into the MRI.”

    To augment the believability that comes with reality, the directors aimed to capture as much practically as possible, then VFX Supervisor Nordin Rahhali and his team built on that result. (Photo: Eric Milner)

    A train derailment concludes the film after Stefani and her brother, Charlie, realize they are still on death’s list. A train goes off the tracks, and logs from one of the cars fly through the air and kill them. “That one was special because it’s a hard sequence and was also shot quite late, so we didn’t have a lot of time. We went back to Vancouver and shot the actual street, and we shot our actors performing. They fell onto stunt pads, and the moment they get touched by the logs, it turns into CG as it was the only way to pull that off, and the train of course. We had to add all that. The destruction of the houses and everything was done in visual effects.”

    Erik survives the tattoo parlor blaze only to meet his fate in a hospital when he is crushed by a wheelchair while being pulled into an out-of-control MRI machine.

    Erik (Richard Harmon) appears about to be run over by a delivery truck at the corner of 21A Ave. and 132A St., but he’s not – at least not then. The truck is actually on the opposite side of the road, and the person being run over is Howard.

    A rolling penny plays a major part in the catastrophic chain reactions and seems to be a character itself. “The magic penny was a mix from two vendors, Pixomondo and FOLKS; both had penny shots,” Rahhali says. “All the bouncing pennies you see going through the vents and hitting the fan blade are all FOLKS. The bouncing penny at the end as a lady takes it out of her purse, that goes down the ramp and into the rail – that’s FOLKS. The big explosion shots in the Skyview with the penny slowing down after the kid throws it [off the deck] are all Pixomondo shots. It was a mix. We took a little time to find that balance between readability and believability.”

    Approximately 800 VFX shots were required for Final Destination Bloodlines. (Photo: Eric Milner)

    Chain reactions of small and big events lead to bloody catastrophes befalling those who have cheated Death at some point in the Final Destination films.

    From left: Kaitlyn Santa Juana as Stefani Reyes, director Adam Stein, director Zach Lipovsky and Gabrielle Rose as Iris. (Photo: Eric Milner)

    Rahhali adds, “The film is a great collaboration of departments. Good visual effects are always a good combination of special effects, makeup effects and cinematography; it’s all the planning of all the pieces coming together. For a film of this size, I’m really proud of the work. I think we punched above our weight class, and it looks quite good.”
  • NVIDIA and Deutsche Telekom Partner to Advance Germany’s Sovereign AI

    Industrial AI isn’t slowing down. Germany is ready.
    Following London Tech Week and GTC Paris at VivaTech, NVIDIA founder and CEO Jensen Huang’s European tour continued with a stop in Germany to discuss with Chancellor Friedrich Merz new partnerships poised to bring breakthrough innovations on the world’s first industrial AI cloud.
    This AI factory, to be located in Germany and operated by Deutsche Telekom, will enable Europe’s industrial leaders to accelerate manufacturing applications including design, engineering, simulation, digital twins and robotics.
    “In the era of AI, every manufacturer needs two factories: one for making things, and one for creating the intelligence that powers them,” said Jensen Huang, founder and CEO of NVIDIA. “By building Europe’s first industrial AI infrastructure, we’re enabling the region’s leading industrial companies to advance simulation-first, AI-driven manufacturing.”
    “Europe’s technological future needs a sprint, not a stroll,” said Timotheus Höttges, CEO of Deutsche Telekom AG. “We must seize the opportunities of artificial intelligence now, revolutionize our industry and secure a leading position in the global technology competition. Our economic success depends on quick decisions and collaborative innovations.”
    This AI infrastructure — Germany’s single largest AI deployment — is an important leap for the nation in establishing its own sovereign AI infrastructure and providing a launchpad to accelerate AI development and adoption across industries. In its first phase, it’ll feature 10,000 NVIDIA Blackwell GPUs — spanning NVIDIA DGX B200 systems and NVIDIA RTX PRO Servers — as well as NVIDIA networking and AI software.
    NEURA Robotics’ training center for cognitive robots.
    NEURA Robotics, a Germany-based global pioneer in physical AI and cognitive robotics, will use the computing resources to power its state-of-the-art training centers for cognitive robots — a tangible example of how physical AI can evolve through powerful, connected infrastructure.
    At this work’s core is the Neuraverse, a seamlessly networked robot ecosystem that allows robots to learn from each other across a wide range of industrial and domestic applications. This platform creates an app-store-like hub for robotic intelligence — for tasks like welding and ironing — enabling continuous development and deployment of robotic skills in real-world environments.
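    To illustrate the “app-store-like hub” idea in the abstract, here is a tiny, hypothetical sketch of a skill registry that robots could publish to and fetch from. It is an invented example for explanation only, not the Neuraverse API or any actual NEURA Robotics interface.

    ```python
    # Hypothetical sketch of an "app-store-like" registry of robot skills.
    # Invented for illustration; this is not the Neuraverse API.
    from __future__ import annotations
    from dataclasses import dataclass, field


    @dataclass
    class Skill:
        name: str        # e.g. "welding" or "ironing"
        version: str
        policy_uri: str  # where the trained policy/weights would be stored


    @dataclass
    class SkillRegistry:
        skills: dict[str, Skill] = field(default_factory=dict)

        def publish(self, skill: Skill) -> None:
            """A robot or vendor uploads a new or improved skill."""
            self.skills[skill.name] = skill

        def fetch(self, name: str) -> Skill:
            """Another robot downloads the latest version of a skill."""
            return self.skills[name]


    registry = SkillRegistry()
    registry.publish(Skill("welding", "1.2.0", "registry://skills/welding/1.2.0"))
    print(registry.fetch("welding"))
    ```

    A real deployment would of course add authentication, versioning and continuous retraining; the sketch only shows the publish-and-fetch pattern the announcement describes.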
    “Physical AI is the electricity of the future — it will power every machine on the planet,” said David Reger, founder and CEO of NEURA Robotics. “Through this initiative, we’re helping build the sovereign infrastructure Europe needs to lead in intelligent robotics and stay in control of its future.”
    Critical to Germany’s competitiveness is AI technology development, including the expansion of data center capacity, according to a Deloitte study. This is strategically important because demand for data center capacity is expected to triple over the next five years to 5 gigawatts.
    Driving Germany’s Industrial Ecosystem
    Deutsche Telekom will operate the AI factory and provide AI cloud computing resources to Europe’s industrial ecosystem.
    Customers will be able to run NVIDIA CUDA-X libraries, as well as NVIDIA RTX- and Omniverse-accelerated workloads from leading software providers such as Siemens, Ansys, Cadence and Rescale.
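    As a rough illustration of what running a CUDA-X-accelerated workload can look like in practice, the sketch below uses CuPy, an open-source Python library that dispatches to CUDA-X components such as cuFFT and cuBLAS. The library choice, array sizes and workload are illustrative assumptions, not details of the announced offering; it assumes an NVIDIA GPU and a working CuPy installation.

    ```python
    # Illustrative only: a GPU-accelerated FFT and matrix multiply via CuPy,
    # which dispatches to CUDA-X libraries (cuFFT, cuBLAS) under the hood.
    import cupy as cp

    # A simulated sensor signal, generated and kept on the GPU
    signal = cp.random.standard_normal(1_000_000, dtype=cp.float32)
    spectrum = cp.fft.rfft(signal)  # cuFFT-backed FFT

    # A dense linear-algebra step, e.g. part of an engineering simulation
    a = cp.random.standard_normal((2048, 2048), dtype=cp.float32)
    b = cp.random.standard_normal((2048, 2048), dtype=cp.float32)
    c = a @ b  # cuBLAS-backed matrix multiply

    # Bring small summaries back to the host for inspection
    print(float(cp.abs(spectrum[:10]).sum()), float(c.trace()))
    ```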
    Many more stand to benefit. From the country’s robust small- and medium-sized businesses, known as the Mittelstand, to academia, research and major enterprises — the AI factory offers strategic technology leaps.
    A Speedboat Toward AI Gigafactories
    The industrial AI cloud will accelerate AI development and adoption by European manufacturers, driving simulation-first, AI-driven manufacturing practices and helping prepare for the country’s transition to AI gigafactories, the next step in Germany’s sovereign AI infrastructure journey.
    The AI gigafactory initiative is a program built around 100,000 GPUs and backed by the European Union, Germany and partners.
    Poised to go online in 2027, it’ll provide state-of-the-art AI infrastructure that gives enterprises, startups, researchers and universities access to accelerated computing through the establishment and expansion of high-performance computing centers.
    As of March, there are about 900 Germany-based members of the NVIDIA Inception program for cutting-edge startups, all of which will be eligible to access the AI resources.
    NVIDIA offers learning courses through its Deep Learning Institute to promote education and certification in AI across the globe, and those resources are broadly available across Germany’s computing ecosystem to offer upskilling opportunities.
    Additional European telcos are building AI infrastructure for regional enterprises to build and deploy agentic AI applications.
    Learn more about the latest AI advancements by watching Huang’s GTC Paris keynote in replay.
  • Tech layoffs surge even as US unemployment remains stable

    Although the US unemployment rate held steady at 4.2% in May with 139,000 jobs added to the US workforce, nearly 100,000 layoffs were also announced — up 47% from last year, according to new data from the US Bureau of Labor Statistics and others. Tech and federal cuts led the way in layoffs, driven by economic pressure, programmatic firings and AI-driven shifts in workforce needs, according to outplacement firm Challenger, Gray & Christmas.

    Technology remains a top sector for cuts amid ongoing disruptions, according to the firm’s data. In May, tech companies announced 10,598 layoffs, bringing the 2025 total to 74,716; that’s up 35% from 55,207 at the same time last year.

    “Tariffs, funding cuts, consumer spending, and overall economic pessimism are putting intense pressure on companies’ workforces. Companies are spending less, slowing hiring, and sending layoff notices,” Andrew Challenger, senior vice president of Challenger, Gray & Christmas, said in a statement.

    Uneasiness continues to weigh on tech hiring, according to CompTIA, a provider of IT training and certification products. The unemployment rate for tech jobs in May was 3.4%, roughly in line with April’s 3.5%, CompTIA data showed. The tech unemployment rate continues to sit below the national rate.


    Tech sector companies added a modest 1,571 net new employees in May, analysis of the BLS jobs report by CompTIA showed. Job growth in cloud infrastructure and tech services was offset by reductions in the telecommunications sector.

    Tech employment across the broader economy declined by an estimated 131,000 positions. “With prior month employment gains, tech occupation employment remains in the positive for the year,” CompTIA said.

    “It is undoubtedly a challenging time for employers and job seekers facing uncertainty on multiple fronts,” said Tim Herbert, CompTIA’s chief research officer. “At the same time, it requires taking a measured approach given the data continues to hold up reasonably well.”

    One bright spot for tech hires in May was the finance and insurance industry, which collectively saw a 21% increase in new tech job postings; new tech job openings also rose by 16% in the retail sector, according to CompTIA.

    Even so, tech layoffs have continued as AI adoption soars and economic pressures drive a major shift toward new roles and skills in the workforce. “AI isn’t replacing jobs,” said Kye Mitchell, president of tech workforce staffing firm Experis US. “It’s fundamentally redefining how work gets done. We’re seeing AI augment skillsets and make professionals more capable, faster, and able to focus on higher-value work.”

    Technology only displaces jobs when about 80% of tasks can be automated — and AI isn’t close to doing that, said Mitchell. Right now, AI is enhancing skills, boosting productivity, and freeing up time for higher-value work.

    Hiring for AI positions and those requiring AI skills continues to grow rapidly, according to a CompTIA analysis of data from Lightcast and a Stanford University study. CompTIA found that employer job postings related to AI are up 117% year to date compared with the same period last year.


    Skills-based hiring remains core to many employers’ recruiting strategies. About half of all tech job postings did not specify a need for a four-year academic degree, seeking instead a combination of work experience, training and industry-recognized certifications, according to CompTIA and other data sources.

    Even so, employers are hesitant to hire. “Economic uncertainty is absolutely creating a cautious hiring environment, but it’s more complex than tariffs alone,” Mitchell said. “Our data shows employers adopting a ‘wait and watch’ stance as they monitor economic signals, with job openings down 11% year-over-year.”

    Still, the tech job market is adjusting as AI adoption grows. AI skill mentions in job postings fell 10% in May but are still up 10% for the year, showing steady demand, Mitchell said.

    The tech industry had been nearly immune to mass layoffs prior to 2022. After a hiring surge between 2020 and 2022 to support digitization efforts as more people worked from home, the market shifted and companies began slashing jobs to readjust to the new reality.

    Tech companies such as Google, Amazon, Meta and others laid off tens of thousands of workers as an adjustment to over-hiring during the COVID-19 pandemic. In 2023 alone, 1,186 tech companies laid off about 262,682 staff, compared to 164,969 layoffs in 2022.

    In January 2024, job cuts leaped 136% over December and hit a 10-month high, according to Challenger, Gray & Christmas.

    While the labor market remained steady, there are signs that hiring across the board is softening. Open job postings fell 7% this year and new postings dropped 16% in the past month — the first full contraction of 2025. Year-to-date, new postings are flat compared to last year, according to Ger Doyle, ManpowerGroup’s regional president for North America. Doyle, however, was optimistic.

    “This is a chill, not a freeze,” he said. “Workers and employers are holding steady, awaiting clarity.”

    For example, project management roles are up 483% year-over-year, he said, and as the broader outlook improves, a rebound could follow.

     Demand for data roles is surging as companies shift from AI experiments to execution. Database architect postings are up 2,140% year-over-year, with data scientist postings up 280% — clear signs of companies building the backbone for an AI-driven future, Experis’s data showed.

    “This shift is also reshaping how talent enters the industry. Entry-level opportunities are becoming more limited, making it harder for recent graduates to gain a foothold,” Mitchell said. “For those looking to break in, deep analytical and technical skills are no longer optional.”
    #tech #layoffs #surge #even #unemployment
    WWW.COMPUTERWORLD.COM
    Tech layoffs surge even as US unemployment remains stable